Positive technology for elderly well-being: A review


Giuliano Grossi a, Raffaella Lanzarotti a,*, Paolo Napoletano c, Nicoletta Noceti b, Francesca Odone b

a Dip. di Informatica, Università degli Studi di Milano, Via Celoria 18, Milano I-20133, Italy
b Dip. di Informatica, Robotica, Bioingegneria e Ingegneria dei Sistemi, Università degli Studi di Genova, Via Dodecaneso 35, Genova I-16146, Italy
c Dip. di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Viale Sarca 336, Milano I-20126, Italy

* Corresponding author. E-mail address: [email protected] (R. Lanzarotti).

Declaration of interest: The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

Keywords: Elderly well-being, Positive technology, Computer vision, Machine learning, Intelligent cognitive assistants, Emotional well-being, Social interactions, Ambient assisted living

Abstract

In the last decades, given the necessity of assisting fragile citizens, of whom the elderly represent a significant portion, a considerable research effort has been devoted to the use of information and communication technologies (ICT) in daily living to promote activity, social connections, and independence. With similar purposes, in recent years psychologists have proposed the novel paradigm of Positive Psychology (PP), the scientific study of positive human functioning and flourishing on multiple levels. The joint effort between ICT and PP has led to the emerging field of Positive Technology (PT), which aims to develop technology consciously designed to foster well-being in individuals and groups. In this paper we review PT, focusing on frameworks involving computer vision and machine learning for promoting cognitive, physical, emotional and social well-being in the elderly. Our discussion highlights a significant gap between theoretical needs and the availability of technological systems, suggesting future lines of research.

© 2019 Published by Elsevier B.V.

1. Introduction

Since the beginning of this millennium, psychologists have highlighted the necessity of a paradigm shift when dealing with the human psyche, overcoming traditional approaches that frame mental conditions within well-accepted "disease models". In contrast to this negative attitude, positive psychology [1] has the goal of promoting human well-being, satisfaction and contentment, therefore improving quality of life. The literature identifies three main characteristics of human personal experience that largely influence overall well-being [2]: (i) engagement/cognitive reasoning, the engagement in satisfying activities and the utilization of one's strengths and talents; (ii) emotional/affective quality, the mood state; (iii) social/connectedness, the integration between individuals, groups, and organizations. In the last decades, researchers have highlighted the importance of human strengths, such as optimism, perseverance and the capacity for flow and insight, not only to prevent mental illness but also to preserve physical health [3,4].


It has indeed been observed that negative emotions affect the cardiovascular system; stress alters immune system functioning, heart rate variability, and blood pressure; anger causes high blood pressure and high cholesterol; and depression is one of the primary causes of early death. More recently, several meta-analyses have been conducted to relate well-being to health, studying both short and prolonged effects. In [5] the authors show that well-being is positively related to both short- and long-term health outcomes. Moreover, in [6] positive psychological well-being has been associated with reduced mortality in both healthy and unhealthy subjects. More debatable results have been reported in [7], a meta-analysis whose interest was to relate PP (in particular self-help interventions) to subjective well-being, psychological well-being, and depression: while the authors confirm the positive effect of PP, even sustained over time, the effect sizes were in the small-to-moderate range. The reasons for these moderate effects are various and deserve further investigation, calling for "more high-quality studies, and more studies in diverse (clinical) populations and diverse intervention formats to know what works for whom" [7].

These findings are particularly relevant for older people, a population share that increases steadily as life expectancy continues to grow. According to the World Health Organization, the world's elderly population (defined as people aged 60 and older) has increased drastically in the past decades and will reach about 2 billion in 2050.


In Europe, the percentage of the EU27 population above 65 years of age is foreseen to rise to 30% by 2060 [8]. Positive psychology can be employed to delay and attenuate the age-related declines in physical health, mental well-being and functional abilities, thus improving the quality of life [9–11]. This attitude extends basic care, which consists in treating illness symptoms, shifting from a disease-centred model to a more comprehensive citizen model.

In the wake of these psychological foundations, stimulated by the socio-economic need to support an aging population, and taking advantage of progress in sensor technology, Sander [12], Calvo et al. [13] and Riva et al. [2] defined a new paradigm that combines positive psychology with ICT: positive computing/technology (PT). The aim of this new paradigm is to exploit technology to improve the quality of personal experience, with the goal of increasing wellness and generating strengths and resilience in individuals, organizations, and society [11]. To this purpose, PT relies on a person-centered and holistic approach to promoting well-being, considering three main spheres: physical and cognitive; emotional and affective; social.

The aim of this review is to draw a complete picture of the state of the art in this domain, with specific reference to Computer Vision and Machine Learning methods contributing to elderly people's well-being. We propose a thorough discussion of PT, tackling the enabling technologies concerned (virtual reality, ambient intelligence, facial expression recognition, people detection and many others) and their connection with the three well-being spheres. Fig. 1 shows how we conceptualize the PT framework for the purpose of this review.

[Fig. 1. Positive technology for elderly well-being. AmI stands for Ambient Intelligence, VR and AR for Virtual and Augmented Reality, ICAs for Intelligent Cognitive Assistants, AC for Affective Computing, AmE for Emotion-aware Ambient Intelligence, FER for Facial Expression Recognition, AASI for Automated Analysis of Social Interaction, FPV for First Person Vision.]

The technologies we review often encompass more than one of the three main spheres of the PT framework. We schematize this concept by ordering the technologies along arcs connecting the PT spheres. We observe that most of the technologies that enable PT are also essential for other related disciplines, such as Ambient Assisted Living (AAL) [14], Affective Medicine (AM) [4] and GeronTechnology (GT) [15]. Nevertheless, the aim of PT is different from that of these related disciplines. AAL focuses on the design and development of technologies that assist elderly and disabled people at home, allowing them to live there comfortably, improving their autonomy, facilitating daily activities, ensuring good safety conditions, and monitoring and treating a given disease. The aim of AM is to design and develop technologies that sense, recognize, and respond to certain aspects of human emotion.

GT refers to a much wider spectrum of methodologies, which may also influence the work of designers and manufacturers, as well as physicians and caregivers; its ultimate goal is to increase the quality of life of older adults. In contrast, PT focuses on elderly well-being by designing and developing technologies to: (1) induce positive and pleasant experiences; (2) support individuals in reaching engaging and self-actualizing experiences; (3) support and improve social integration and/or connectedness between individuals, groups, and organizations [2]. For this reason, the viewpoint of our survey is substantially different from that of previous reviews such as [16–18], which consider AAL, or [19], focused on affective medicine.

In the following sections we explore the recent contributions in ICT addressing the three spheres influencing human well-being. In Section 2 we discuss the cognitive and physical level, that is, techniques to support individuals in reaching engaging and self-actualizing experiences. In Section 3 we evaluate the emotional level, in particular those methods involved in either the evaluation of affective quality (e.g. emotion recognition) or the induction of positive and pleasant experiences. In Section 4 the social level is investigated, with an analysis of the literature on the automated analysis of social interaction and of its possible use to improve social integration and inclusion for the elderly. Finally, in Section 5 we draw some conclusions, discussing the level of maturity of the positive technology field.

2. Cognitive and physical well-being

This section focuses on the aspects of well-being related to the cognitive and physical levels, that is, how technologies, mostly based on Computer Vision, can be used to support individuals in reaching engaging and self-actualizing experiences. The concept of the engaged life is, as argued by Seligman [20], one of the three pillars of the good life. The engaged life supports cognitive and physical well-being, and it is achieved through engagement in satisfying activities and the utilization of one's strengths and talents. Starting from the papers by Riva et al. [11] and Pal et al. [21], and from recent advancements in technologies for assistive health care [22], we have identified the following key technology domains: Ambient Intelligence (AmI), Virtual and Augmented Reality (VR and AR), Intelligent Cognitive Assistants (ICAs) and First Person Vision (FPV).

2.1. Ambient intelligence for elderly

Ambient Intelligence (AmI) refers to hardware devices (sensors, gateways, computational boards, etc.) and software technologies (communication protocols, recognition algorithms, etc.) that make an environment sensitive and responsive to people's presence [23]. The type of sensors, monitoring systems, and devices installed depends on the application type and the nature of the support provided [24]. AmI, when devoted to older users, has the goal of designing smart homes that allow them to live longer in their preferred environment, improve the quality of their lives, and reduce costs for society and public health [25,26]. Depending on the usage scenario, smart homes are designed for different purposes [21]: health monitoring [27], environment monitoring [25], social communication [28], providing companionship [29], and recreation and entertainment [30]. Elderly people consider wearable sensors more invasive and unpleasant than environmental sensors [21,31].
For this reason, in the context of smart homes, Computer Vision technologies are of great interest [32]; they are exploited for activity analysis and recognition [30,33], fall detection [34], sleep monitoring [35], food recognition [36], and gesture recognition [37]. Specifically, activity monitoring is defined depending on the level of complexity of the activity, and includes human pose monitoring, action recognition (as a sequence of postures) and interaction between people (for instance, two individuals) [38].

Several methods for pose monitoring for the elderly have been proposed [39]. The current state of the art in human pose estimation is the approach proposed by Güler et al. [40], which exploits a variant of a Region-based Convolutional Neural Network (CNN) to map all human pixels of an RGB image to the 3D surface of the human body. Action recognition has been largely explored in the literature; the interested reader may refer to [41,42] for a complete overview. The current state of the art is based on deep learning: action recognition from still images [43], action recognition from spatio-temporal cuboids of images (such as the C3D model) [44], and action recognition exploiting the estimation of the human skeleton [45].

Fall detection is a very relevant objective in the smart home literature; readers can refer to [46] for an in-depth survey. Fall detection methods are divided into two classes: fall detection treated as a classification problem (fall vs. no-fall) [47], and fall detection treated as unusual-activity (anomaly) detection [34].

A good quality of sleep is an important indicator of patient health and wellness. Often the quality of sleep is reduced by sleep disorders such as apnea, or by other diseases that cause restlessness [48]. Vision-based approaches are considered less intrusive than traditional non-vision sensing technologies such as polysomnography, wearable bracelets, and non-wearable strips or bands [38]. Vision-based methods monitor the wake/sleep status and are classified into three main groups: one that makes use of infrared cameras for monitoring physiological parameters such as breathing rate [49]; a second group that uses cameras to estimate body movements; and a third group that exploits infrared cameras to monitor the breathing rate together with motion sensors to detect movements of the body [50].

A range of novel technologies have been developed to support humans in tracking their food consumption and increasing their awareness of what they eat. Several mobile applications for food intake assessment and logging are available, but they require manual and tedious input of the food intake. Many recent works in the Computer Vision domain have addressed the challenge of food recognition, quantity estimation and leftover estimation. The most promising approaches for food recognition are based on custom or pretrained CNNs [51,52]. Food quantity estimation, which also includes the task of leftover estimation, has been considered by exploiting both hand-crafted and learned features [36,53]. In these approaches, reference information is used to estimate the quantity of food on the plate; this reference may come from markers or tokens for camera calibration, or from the size of reference objects (e.g., a thumb or eating tools). Other works use 3D techniques coupled with template matching or shape reconstruction algorithms [54].

Activities of daily living in the elderly have also been assessed through vision-based gesture recognition; here, approaches based on depth sensors [55] and RGB cameras [56] have been proposed. The most promising approaches are based on CNNs and are very similar to the ones adopted for action recognition: frame based, spatio-temporal based and hand-detection oriented [37].
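As an illustration of the classification formulation of fall detection (fall vs. no-fall), the sketch below trains a binary SVM on simple pose-derived descriptors. It is a minimal example under assumptions of ours, not the method of [47]: the features (body aspect ratio, vertical extent, centroid position) and the keypoint format are hypothetical, and the keypoints are assumed to come from any off-the-shelf pose estimator.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pose_features(keypoints):
    """Hypothetical per-frame descriptor from 2D keypoints, shape (N, 2),
    as produced by any off-the-shelf pose estimator (x, y per joint)."""
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    height = ys.max() - ys.min()        # vertical extent of the body
    width = xs.max() - xs.min()         # horizontal extent
    aspect = width / (height + 1e-6)    # a lying body has aspect >> 1
    centroid_y = ys.mean()              # large y = low in the image (image
    return np.array([aspect, height, centroid_y])  # coords grow downward)

def train_fall_detector(keypoint_frames, labels):
    """labels: 1 = fall, 0 = no-fall, one per frame (or short clip)."""
    X = np.stack([pose_features(k) for k in keypoint_frames])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```

The intuition is that a fallen body has a large width-to-height ratio and a low centroid; the anomaly-detection variant [34] would instead model only normal activity and flag outliers, with no labelled falls needed.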


2.2. Virtual reality, augmented reality

Virtual Reality (VR) environments can facilitate positive emotions, as well as well-being, in the elderly [11].
VR is a technology involving hardware devices and software able to generate virtual, plausible, immersive and realistic environments in which people feel present in both mind and body [57,58]. Several VR applications have proved very effective in the health domain; readers can refer to [59] for a complete overview.


Computer Vision technologies are employed in the context of VR to enrich the subjective experience. With the support of body, head, hand, eye and face tracking, people can achieve non-immersive interactions (where the users are not represented within the virtual world), semi-immersive interactions (where a user is represented in the virtual world as an avatar) and fully-immersive interactions (where the user interacts from the perspective of being inside the virtual world) [60].

Augmented Reality (AR) is a recent technology that displays a virtual layer of information, graphic objects, or both on top of the real world captured through a device camera [61]. AR has been largely employed [62] to enhance cultural heritage experiences [63], to enrich gaming experiences [64], to improve education performance [65], to increase the quality of life of the elderly [66], as well as to handle chronic health conditions [67].

2.3. Intelligent cognitive assistants

People with cognitive disabilities may need continuous assistance even for completing daily-life tasks, such as washing or getting dressed. Intelligent Cognitive Assistants (ICAs) have been introduced for this purpose. They provide monitoring functionalities to recognize people's activities and, if needed, trigger prompt assistance. The emotional and affective dimension is taken into strong consideration, to favor the interaction between humans and technology. In this respect, it is worth mentioning COACH (Cognitive Orthosis for Assisting with aCtivities in the Home), a system introduced in [68] to support people affected by Alzheimer's disease. The authors consider the problem of monitoring people while they wash their hands using a video camera, and design a strategy to understand when they lose track of the task and thus need assistance, which is provided through audio prompts. Although of interest, this system has a poor capability of generalizing to different subjects and scenes.

Another important aspect is the relationship between emotions and cognitive well-being, considered by many studies highlighting the necessity of incorporating emotions in pervasive systems. In this vein, Malhotra et al. [69] propose virtual humans with facial expressions that can be clearly understood by elderly people. The intent is to provide prompts empathically aligned with the emotions of people who have cognitive disabilities and need assistance. The same authors have proposed a novel emotionally intelligent cognitive assistant [70] which combines artificially intelligent controllers with a model of the dynamics of emotion and identity called Affect Control Theory (ACT), arising from the sociological literature on culturally shared sentiments. In this framework, body posture is observed during hand-washing activities to infer both the functional and the affective meaning of the person's behaviour. The collected information is fed into a reasoning system that makes decisions based on a probabilistic process. Mathematically, ACT provides a maximum likelihood solution in which optimal behaviours or identities can be predicted based on past interactions. More interesting is a recent probabilistic and decision-theoretic generalization of ACT, called BayesACT [71], which builds on the same principles but turns out to be more expressive, because its probability distribution simultaneously maintains multiple hypotheses about behaviours and identities.
A relevant aspect of this theoretical work is its characterization of affectively intelligent artificial agents: the agent is able to select its strategy of action so as to maximize the expected outcomes, based both on the application state and on its emotional alignment with the human. In [70], BayesACT is explicitly combined with a partially observable Markov decision process (POMDP) to model the hand-washing task. As a final result, the authors assert that the actions suggested by BayesACT work well across the different identities that the client may have.
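To make the probabilistic machinery concrete, the following sketch implements a generic discrete POMDP belief update, the kind of reasoning that BayesACT generalizes by simultaneously maintaining multiple hypotheses over identities and behaviours. This is an illustrative textbook update, not the authors' model; the transition tensor T and observation model O are hypothetical inputs.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Generic discrete POMDP belief update (illustrative, not BayesACT):
    b'(s') is proportional to O[a, s', o] * sum_s T[a, s, s'] * b(s),
    with T: (A, S, S) transition model and O: (A, S, Obs) observation model."""
    predicted = T[a].T @ b           # predict the next hidden state
    b_new = O[a, :, o] * predicted   # weight by the observation likelihood
    return b_new / b_new.sum()       # renormalize to a distribution
```

After each assistive action a and observed cue o (e.g. a recognized posture), the belief over hidden states (e.g. task step and affective identity) is sharpened, and the next prompt can be chosen against this belief.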


2.4. First person vision

Among the emerging assistive technologies, it is worth mentioning a relatively recent framework arising in Computer Vision known as First Person (egocentric) Vision (FPV) [22,72,73]. Its aim is to understand the user's environment, behaviour and intent through wearable sensors, such as action cameras or smart glasses, producing artificially intelligent systems that act in first person: "it sees what the user sees and looks where the user is looking" [72]. The interest in this field is increasing because it offers several benefits associated with the long-term monitoring of human activities in home or community living. In particular, non-intrusive wearable cameras are suitable for gathering visual information about the environment or the objects within the scene, helping the aging population or disabled subjects to better focus their attention, to improve their interaction with objects [74] or people [75], and to accomplish ordinary tasks or specific goals [76,77]. Recent video cameras such as MeCam, GoPro, Google Glass or Looxcie are already applied in the field of health care and assistive technologies, both to collect first-person video [78,79] and to support the independent and healthy living of older and impaired people so as to improve their quality of life (see [80] for an overview and references therein).

Specifically, one of the primary interests of wearable computers is to recover general user context. This increase in contextual information opens up the more general problem of understanding the environment where people move or act while performing daily living activities. A key step to acquire this knowledge is recognizing personal locations (such as office, kitchen, living room) from egocentric videos [81–83].

Another challenging task in this field is understanding human activities from a first-person perspective, with a wide range of target applications including ambulatory activities (e.g., running or walking) and person-to-object or person-to-person interactions (e.g., typing or hugging) [74,75,77,84,85]. For example, a pervasive system that recognizes complex human daily living activities from reading glasses integrating a 3-axis accelerometer and a first-person-view camera is developed in [77]. The aim is to improve the quality of life of the aged, mostly by making activities of daily living safer and easier to complete. The system has been validated on healthy elderly people and a mixed patient population with recent disabilities, using a dataset containing on average 30 minutes of sensor recordings of realistic sequential activities in home or public environments, such as walk, upstairs, drink, stand-up and sit-down. The method improved performance over conventional classification approaches by 20%–40% on average.

Finally, egocentric systems are also able to assist users and augment their ability to recognize objects, in order "to predict which objects the user is going to interact with next" from egocentric videos, a task also referred to as "next-active-object prediction" [74,86]. In the literature, active objects are objects being manipulated by the user, and provide important information about the action being performed (e.g., using the kettle to boil water), while passive objects are non-manipulated objects and provide context information, such as the items in a kitchen.
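Systems such as the one in [77] combine an accelerometer with a first-person camera; a common and simple way to combine two such modalities is late fusion of per-class scores. The sketch below is a hypothetical illustration of that idea, not the actual pipeline of [77]: both upstream classifiers, the feature choices, and the mixing weight alpha are assumptions.

```python
import numpy as np

def accel_features(window):
    """Hypothetical features from a 3-axis accelerometer window, shape (T, 3)."""
    mag = np.linalg.norm(window, axis=1)            # per-sample magnitude
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [mag.mean(), mag.std()]])

def late_fusion(p_camera, p_accel, alpha=0.5):
    """Blend per-activity probabilities from the two modalities."""
    p = alpha * np.asarray(p_camera) + (1 - alpha) * np.asarray(p_accel)
    return p / p.sum()

ACTIVITIES = ["walk", "upstairs", "drink", "stand-up", "sit-down"]
# p_camera / p_accel would come from two separately trained classifiers.
p = late_fusion([0.5, 0.1, 0.2, 0.1, 0.1], [0.3, 0.4, 0.1, 0.1, 0.1])
print(ACTIVITIES[int(np.argmax(p))])                # -> "walk"
```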
3. Emotional well-being

This section specifically considers the emotional level of well-being. To this aim, technology can be employed in two different ways: first, to detect the emotional state of people; second, to automatically induce positive emotions. This sphere is closely related to the concept of Affective Computing (AC) [87], the discipline concerned with "emotion interactions with and through computer".

Several surveys have been produced covering this domain [88–90]. Certainly, AC is a wide interdisciplinary area. Referring merely to the emotion recognition task, several human features are involved, such as speech, facial expression, body gestures, and internal physiological changes such as blood pressure, heart rate variability, or respiration, each of which constitutes a largely explored research discipline. The most valuable speech emotion recognition systems are reviewed in [91], Facial Expression Recognition (FER) is surveyed in [92,93], body expression perception and recognition are tackled in [94–96], and multimodal affect recognition techniques based on physiological computing and data fusion are dealt with in [97–100]. A full survey on AC is out of the scope of this paper: a further specialization is deserved, investigating those contributions conceived specifically for older adults' well-being. Moreover, we further restrict the field of research to emotion recognition methods relying on visual information (such as facial expression and body posture/movement), this being the most evident human-like way to communicate emotions. The restriction to older adult users is suggested by the proven consideration that human aging affects computational systems dealing with facial expressions [101,102], body movements [103], speech [104], and physiological measures [105], so that general-purpose methods could be unsuitable for this specific domain.

In addition to the emotion recognition phase, the virtuous loop of inducing positive emotions requires methodologies for fostering positive emotions in the elderly. In the psychological domain, a large research effort has been devoted to this issue, introducing mood induction procedures (MIPs) based on the use of environmental light changes [106], emotion-evoking videos [107], video clips [108], virtual reality [109], music [110,111], and multimodal procedures [112–114]. Here we focus on integrated technological systems that include both the emotion recognition phase and the MIPs.

3.1. AmE: emotion-aware ambient intelligence

A first requirement for promoting positive emotions is enriching the smart environments introduced in Section 2.1 with actuators, so as to make AmI responsive to human needs, habits, gestures, and emotions [23]. This new concept has been named emotion-aware ambient intelligence (AmE). In the literature some proofs of concept have been presented, even though they do not propose models specific to elderly well-being. In [115] the authors propose a generic, open and adaptable AmE architecture for emotion detection and regulation. This architecture is conceived to capture and integrate physiological signals, facial expression and body movement, the first two aimed at determining the emotional state, the third at detecting the level of activation of the monitored person. On the basis of the person's emotional state, music and colour/light actuation should be performed by means of specific actuators, to drive the subject towards a pleasant mood. In [116] a three-layer AmE architecture is introduced. It consists of simple, dedicated devices that act as sensors collecting information about the person's health/emotional status, a decision maker with more powerful computing resources, and an actuator; the actuator in this case is a simple alert to caregivers. In [103] a unified model promoting the well-being of the elderly living at home is proposed.
It takes into account three aspects of well-being: health, activity, and emotions. Concerning emotions, it counts smiles to estimate a measure of happiness. Such a measure is then integrated with health and activity coefficients in order to derive a well-being rating, conditioned on a priori knowledge of the person (e.g. his/her attitude to movement) so that the measure is personalized. On the basis of this rating, the AmE would either propose an activity, turn on a display projecting photographs known to make the user happy, or alert the caregivers.
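The paper does not spell out how the coefficients are combined; a minimal sketch of such a personalized rating, with hypothetical per-user weights, a hypothetical smile-count normalization, and health/activity inputs assumed already normalized to [0, 1], could look as follows.

```python
def wellbeing_score(health, activity, smile_count,
                    baseline_smiles=10.0, weights=(0.4, 0.3, 0.3)):
    """Hypothetical personalized well-being rating in [0, 1].
    baseline_smiles encodes a priori knowledge of the person
    (their usual daily smile count), so the happiness term is
    relative to habit rather than absolute."""
    happiness = min(smile_count / baseline_smiles, 1.0)
    w_health, w_activity, w_emotion = weights   # per-user weights (assumed)
    return w_health * health + w_activity * activity + w_emotion * happiness

# A user who rarely smiles: 4 smiles against a personal baseline of 5.
print(wellbeing_score(health=0.8, activity=0.5, smile_count=4,
                      baseline_smiles=5.0))     # -> 0.71
```

A low rating would then trigger one of the actuations described above (propose an activity, show pleasant photographs, or alert the caregivers).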


Two important factors to address when designing and implementing an AmE are bandwidth and computational cost, both of which must be kept low [117].

3.2. Face expression recognition (FER) systems for older adults

Among the features revealing the emotional state of older people, the most widely investigated is facial expression, which deserves a specific discussion. Despite the wide literature concerning FER in generic situations, we identified only a few FER systems conceived for older adults, all sharing the aim of classifying facial expressions into one of the seven fundamental emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise, Neutral). Most of them carry out their tests on standard facial expression databases such as JAFFE (Japanese Female Facial Expression) [118] and the Cohn-Kanade (CK) database [119], while only a few consider real home setups, or at least datasets of older adults.

A typical pipeline adopted in these works consists in extracting some local characterization of the face, followed by the application of a classifier. In [120] and [121] the face characterization is obtained adopting active shape models, while the classification is carried out by Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN) classifiers, respectively. In [116] an AmE system is proposed, including a FER module that extracts 33 fiducial points with a tracking algorithm and then classifies the emotions with an ensemble classifier constituted by K-NN, Decision Tree, Fuzzy Logic, Bayesian Network and SVM classifiers. In [122] the authors propose a healthcare system focused on emotional aspects, which recognizes emotions from face and speech and promotes positive emotions upon detecting negative emotional states; the FER is based on RGB video processing, consisting of the extraction of directional ternary patterns (DTP) and the application of an SVM. Multiscale approaches are proposed in [117,123]. The first adopts a multiscale Weber local descriptor applied to image sub-blocks, followed by Fisher discriminant analysis and an SVM classifier. The second adopts a bandlet transform followed by the center-symmetric Local Binary Pattern feature extractor applied to sub-blocks to preserve locality; each block produces a histogram that is weighted according to the entropy of the corresponding block and concatenated with the other histograms computed on the image at hand; feature selection is then carried out based on the Kruskal–Wallis test, and two classifiers, a Gaussian mixture model and an SVM, are applied and their scores combined to determine the final classification. In [124] depth sensor-based video camera images are adopted, applying a novel feature extraction method called Local Directional Position Pattern; unlike typical local directional patterns, this feature adds sign information to the edge strength, which is very relevant for expression recognition. The obtained features are refined by applying principal component analysis and generalized discriminant analysis, and the final classification is obtained by applying a Deep Belief Network to the resulting features. An end-to-end approach is proposed in [125], implementing a deep learning method for FER in older adults based on a Stacked Denoising Auto-Encoder; validation and tests have been conducted on public age-expression datasets, the FACES dataset [126] and the Lifespan dataset [127].
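The pipeline these works share (local face characterization followed by a classifier) can be summarized with a small sketch. The geometric features below (inter-ocular-normalized pairwise landmark distances) and the landmark layout are our own illustrative assumptions, not the descriptors of any specific cited system.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

EMOTIONS = ["Anger", "Disgust", "Fear", "Happiness",
            "Sadness", "Surprise", "Neutral"]

def landmark_features(points, eye_idx=(0, 1)):
    """Pairwise distances between fiducial points (e.g. 33 tracked points),
    normalized by the inter-ocular distance to remove scale. The landmark
    layout (eyes at indices 0 and 1) is a hypothetical convention."""
    points = np.asarray(points, dtype=float)
    iod = np.linalg.norm(points[eye_idx[0]] - points[eye_idx[1]]) + 1e-6
    return np.array([np.linalg.norm(points[i] - points[j]) / iod
                     for i, j in combinations(range(len(points)), 2)])

def train_fer(landmark_sets, labels):
    """labels: indices into EMOTIONS, one per face."""
    X = np.stack([landmark_features(p) for p in landmark_sets])
    return SVC(kernel="rbf").fit(X, labels)
```

Swapping the feature function (DTP, Weber descriptors, bandlet-LBP histograms) or the classifier (K-NN, ensembles, Deep Belief Networks) recovers the variants surveyed above.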
In [128] the authors highlight the necessity of capturing longer-term affective states (mood), rather than instantaneous emotional fluctuations, when the intent is to automatically drive mood induction procedures such as ambient light intensity and colour regulation. They propose a system that automatically predicts the mood as perceived by other humans on the basis of punctual (frame-by-frame) emotion estimations, adopting a model that keeps a memory of the last recognized emotions yet exponentially discounts their importance in the overall mood prediction.
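A minimal sketch of such an exponentially discounted mood estimate, assuming per-frame emotion posteriors over the seven classes (the decay constant is a hypothetical choice, not the value used in [128]):

```python
import numpy as np

def update_mood(mood, emotion_probs, decay=0.9):
    """mood_t = decay * mood_{t-1} + (1 - decay) * e_t: past frames fade
    with exponentially decreasing weight; decay is a hypothetical value."""
    return decay * np.asarray(mood) + (1 - decay) * np.asarray(emotion_probs)

rng = np.random.default_rng(0)
frame_emotions = rng.dirichlet(np.ones(7), size=100)  # stand-in FER outputs
mood = np.full(7, 1 / 7)                              # uninformative start
for e in frame_emotions:
    mood = update_mood(mood, e)
```

The recursion keeps a memory of all past emotion estimates while letting recent frames dominate, which is exactly the behaviour the authors advocate for driving slow actuations such as ambient light.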


4. Social interaction

The automated analysis of social interaction (AASI) based on visual cues has been studied in the literature, often with application to video-surveillance, robotics, social signal processing, and human-human interaction analysis. Although focused on robotics applications, the very recent survey [129] provides a general view of computational approaches for perceiving humans and their interactions. We notice, instead, a rather limited set of contributions on the analysis of social interaction in the context of well-being assessment (see e.g. [27]). This is in contrast with statements in the medical literature, according to which social participation is a key determinant of healthy aging and therefore an important emerging intervention goal for health professionals [130]. Although there is no common definition of social participation, it has been experimentally observed that it is mostly related to the involvement of a person in activities providing some level of interaction with others. The authors in [130] propose an interesting taxonomy of social activities, which we summarize as follows: being with others, that is, sharing space but not necessarily interacting; interacting with others, that is, socializing without carrying out any specific activity; participating in a common activity; helping others. These different activities are associated with different types of involvement: in the first case we may have individuals carrying out independent activities in parallel, while in the other cases we actually observe some form of interaction, increasingly subtle and complex to estimate as the participation becomes deeper.

It should be noticed that not all these tasks have been extensively addressed in the field of automated analysis of social interaction, and that none of the existing approaches is specifically meant for analysing older users. Thus, our discussion is based on methods that seem to have the potential to be applied to our reference case. In this vein, two main issues should be dealt with when adopting a general model in the elderly domain: first, the peculiarities of mobility and social interaction; second, the specificity of classical indoor domestic environments. Concerning the former, social interaction analysis methods may incorporate motion and attention cues that differ between young and elderly adults; all available models are built on datasets of young people, as these are much easier to gather, and specializing them to the elderly would in all cases require an adaptation to elderly behaviour and kinematics. Concerning the latter, we recommend specific care in considering that some cues of interaction are not necessarily vision-based (e.g., two people sitting far apart from one another, engaged in different activities while chatting, is a typical interaction that could be spotted by audio analysis), and we observe that domestic environments present very specific computational challenges, such as large occlusions and small interaction areas, which should also be addressed. In the remainder of the section we organize Computer Vision methods potentially contributing to the different layers, highlighting open problems and research gaps.
4.1. Being with others

A specific line of research aims at analysing groups of individuals sharing space and undertaking common activities, although not necessarily interacting. In social sciences, this is sometimes referred to as unfocused interaction [131]. In this context, a first step is identifying groups of individuals in the scene, to measure the individual inclination towards a social environment. Here we are mainly interested in analysing relatively small groups, coherently with what is reported in [130].


In the literature, the detection of groups involves people detection and pose estimation. As the analysis covers the temporal dimension, multi-target tracking may be a crucial aspect to consider [132]. Occlusions are a very significant technical challenge, which can be addressed by leveraging contextual information, including human social behaviour [133]. Social context is also used as additional information to improve activity recognition; a common trait in this line of research is to devise spatio-temporal local descriptors of the "social" context, capturing the distribution of people around an individual with respect to position, orientation and motion [134,135]. This concept has also been extended to the case of ego-vision systems [136]. Joint activity classification [137] and temporal causalities [138] are also useful to accumulate dynamic information about groups.

4.2. Interacting with others

Interaction or, more specifically, focused interaction occurs when different individuals are co-present, have a common focus of attention, and engage face-to-face [131]. A large body of vision-based research in this field has considered constrained settings, such as meeting scenarios. More recently we have observed a growing interest in unconstrained environments; in this case the amount of clutter in the scene and the quality and quantity of dynamic events may grow, and more specific preprocessing tasks need to be addressed. A notable example is the analysis of free-standing conversational groups (see [139] and references therein). When addressing well-being and social interaction, constrained and unconstrained settings are both of interest.

Automatic analysis of social interaction usually includes the reconstruction of the group structure (mutual positions), often leveraging the estimation of head/body pose and orientation [140]. It may also consider higher-level tasks, involving activities or even the emotional sphere of the individual or of the group itself. A specific case is that of dyadic (two-people) interactions. Here a first step consists in detecting pairs of people engaged in an interaction, for instance looking at each other [75,141,142]; dyadic interaction classification is addressed in [143], while the goal in [144] is to recognize human interactions and the way people affect each other using motion co-occurrence. Groups of interacting people, or structured groups, are studied in [145]. Here the authors focus on the way people spatially interact with each other (e.g. people facing each other to talk, sitting on a bench side by side, standing alone); they propose to learn an ensemble of discriminative interaction patterns to encode the relationships between people in 3D, and introduce a novel efficient iterative augmentation algorithm for solving the underlying inference problem. Social interaction in groups is also studied in [146], where multiple poses are estimated jointly.

The problem of detecting social interactions is particularly challenging in unconstrained settings. Of special interest is the detection of so-called F-formations. According to the well-known framework proposed by Kendon [147], F-formations are configurations where "... two or more people sustain a spatial and orientational relationship in which the space between them is one to which they have equal, direct and exclusive access". In recent years, various approaches to detect F-formations have been proposed [148–150].
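In the spirit of the voting-based detectors among [148–150] (not a reimplementation of any of them), the toy sketch below lets each person cast a vote for the centre of the shared o-space along their orientation and clusters the votes; people whose votes agree form one F-formation. The stride and radius values are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def detect_f_formations(positions, orientations, stride=0.75, radius=0.6):
    """Each person votes for the centre of the shared o-space at distance
    `stride` (metres) along their body/gaze orientation; votes closer than
    `radius` are merged into one F-formation (single-linkage clustering)."""
    votes = positions + stride * np.stack(
        [np.cos(orientations), np.sin(orientations)], axis=1)
    return fcluster(linkage(votes, method="single"),
                    t=radius, criterion="distance")  # group label per person

pos = np.array([[0.0, 0.0], [1.2, 0.0], [5.0, 5.0]])  # top-view positions
ori = np.array([0.0, np.pi, np.pi / 2])               # first two face each other
print(detect_f_formations(pos, ori))                  # -> [1 1 2]
```

The first two people face each other, so their votes land near the same point between them; the third person's vote falls elsewhere and is left in a singleton group.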
Benchmark datasets, including the rich multimodal SALSA dataset [139], have been used to compare the proposed methods, showing state-of-the-art precisions ranging from 0.66 to 0.87 depending on the benchmark [140]. Once signs of interaction are detected, group dynamics are evaluated. A classical application in this case is the analysis of meetings. The authors of [151] propose an event-based dynamic context model to analyze group interactions at multiple abstraction levels; multilevel events are defined in compliance with the context hierarchy to form an interweaved context-event hierarchy.

In [152] the authors model individual and group actions with the purpose of recognizing sequences of human interaction patterns and structuring them in semantic terms. Another key aspect in interaction analysis is the focus of attention. Again in meeting scenarios, the work in [153] introduces a contextual model for recognizing people's visual focus of attention from audio-visual perceptual cues; the participants' visual attention is recognized jointly, in order to build context-dependent interaction models that relate to group activity and the social dynamics of communication. Human interactions are recognized from videos in [154] by learning interactive phrases, high-level descriptions of the motion relationships between interacting people: the authors propose a discriminative model to encode interactive phrases and, with an information-theoretic approach, discover data-driven phrases able to differentiate human interactions. Key poses can also be used to represent interactions [155]. Interaction prediction is an important trigger for the prevention of dangerous events; the problem is considered in [156], where the authors propose a method based on a deep architecture fed with flow-coding images.

4.3. Building social relationships

At a higher level, it may be useful to analyse specific social dynamics within a group, to assess independence or even an attitude to helping others. To this purpose, it may be useful to analyse interpersonal relationships, defining associations (warmth, friendliness, dominance, ...) between two or more people, or social roles [129,157,158]. Among the various nuances this task may take, of particular interest is the detection of fine-grained interpersonal relation traits [159]. In a very recent work [160], interpersonal relations are inferred from facial images using a deep convolutional network embedded in a multitask framework capable of learning face representations together with a number of auxiliary attributes such as head pose, gender, and age. In [161] the authors build a weighted graph populated with people descriptions, where arcs represent relationships among people; the method determines leading and supporting roles, and segments the video accordingly. Again on videos, the work in [162] presents an approach to construct social networks from low-level audio-visual features and to understand the presence of communities within them. In [157] the authors leverage existing large-scale visual concept detectors to represent videos and detect social structures; a probabilistic graphical model with temporal smoothing allows them to analyze how social relations among actors evolve. The interesting approach in [163] introduces a hierarchical classifier, learned in a fully supervised setting, that encodes actions, role-based unary components, pairwise roles, and group activities; information on social relations is an intermediate product of the analysis. A semi-supervised approach is instead considered in [164], where the authors propose a method to understand social roles emerging from human-human and human-object interactions. Higher-level behavioural determinants are included in the review analysis presented in [139], as a complement to the publication of the SALSA dataset.
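As a toy illustration of the relationship-graph idea of [161] (not the authors' algorithm), one can accumulate pairwise interaction time into a weighted graph and use centrality to separate leading from supporting roles; the data and the top-k selection below are our own assumptions.

```python
import networkx as nx

def leading_roles(interaction_seconds, top_k=1):
    """Nodes are people, edge weights accumulate observed interaction time;
    eigenvector centrality singles out leading vs. supporting roles."""
    G = nx.Graph()
    for (a, b), w in interaction_seconds.items():
        G.add_edge(a, b, weight=w)
    cent = nx.eigenvector_centrality(G, weight="weight")
    return sorted(cent, key=cent.get, reverse=True)[:top_k]

obs = {("anna", "bruno"): 40.0, ("anna", "carla"): 25.0,
       ("bruno", "carla"): 5.0}
print(leading_roles(obs))   # -> ['anna'], the most central participant
```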
4.4. Beyond analysis

The overview reported in this section covers primarily Computer Vision and Machine Learning methods for assessing the social attitude and social role of individuals. We conclude the section with a reference to methods and technologies whose aim is to improve or stimulate sociality.


One possibility is to resort to socially assistive agents: in particular, social robotics refers to a branch of robotics aiming at the design and development of artificial agents that may be perceived as social entities with communication capabilities [165]. Their main functions are either to enhance health and psychological well-being or to support daily activities related to independent living. For instance, in [166] the authors discuss a socially assistive robot that aims to motivate and engage elderly users in physical exercise and social interaction, to support physical and cognitive healthcare needs. In [167,168], instead, the use of robotic platforms to help address the social and emotional needs of the elderly (reducing depression and increasing social interaction with peers) is investigated. It has been experimentally shown that seniors who participated in a group were more likely to interact socially with each other when a robot was present and powered on than when it was powered off or absent [168].

5. Conclusion and discussion

Recent years have witnessed a rapidly growing interest in assisted living technologies and the subsequent attempt to develop intelligent systems within the positive technology (PT) paradigm. The purpose of this study was to draw a complete picture of the state of the art in this domain, with special reference to Computer Vision and Machine Learning methods contributing to elderly people's well-being. The study highlighted significant scientific and technological advancements, which may contribute effectively to promoting physical, cognitive, emotional, and social well-being. Nevertheless, the PT domain certainly deserves further analysis and field testing to demonstrate its effectiveness in supporting long-term well-being. In this concluding section we identify some complementary important aspects that we believe need further investigation.

Cross-fertilization. Our review clearly reveals a lack of communication between psychologists and ICT scientists. Significant advances have been achieved by both communities: psychologists have developed several theories about human well-being, and computer scientists have proposed effective machine learning and computer vision methods for behaviour understanding and human-centered technologies. Yet there are very few multidisciplinary approaches making the best of the two.

Data. Current machine learning methods require large quantities of data reflecting the specificity of the application domain. Ad hoc data, especially for older people, are difficult to collect in house or to gather from common data sources such as the web or social networks. We suggest the necessity of a joint effort of multiple institutions, involving hospitals, health-care facilities, and social workers, to obtain a consistent set of multi-modal annotated data covering different aspects of physical, cognitive, social, and emotional well-being.

Acceptance. Nowadays most services and resources are digital, and this is critical for the elderly because the use of digital technologies can increase their isolation from society (the digital divide). As confirmed by previous studies, older people are slower than young people in learning new technologies and, more importantly, they feel low self-efficacy, anxiety and hostility when such technologies are difficult to use. In contrast, older people are encouraged to adopt technologies when they perceive them as useful and beneficial for their well-being [21]. For instance, a recent study evaluated the impact of a VR experience on older adults' well-being and showed that the VR intervention had overall positive effects on participants' social and emotional well-being and, more importantly, that the participants reported being satisfied with and accepting of the technology and its contents [169]. However, these findings do not prove effectiveness in the case of long-term use. We believe that one of the future challenges will be to study the effects of prolonged use of positive technologies, as well as the design of technologies based on user experience and easy to use for older adults.

Privacy. When designing intelligent monitoring systems, a crucial issue to tackle is how to protect and preserve people's privacy. This is true in general, but even more essential for systems that process visual information. A recent European Union regulation (the EU General Data Protection Regulation, GDPR) introduced specific policies to handle sensitive data for scientific research purposes. The most important issues introduced by the Regulation concern the use of organizational and technical safeguards, such as data pseudonymization, and new rights for data subjects, such as the "right to be forgotten" and the right to data portability. This highlights the fact that new technologies must be designed and employed so as to demonstrate compliance with the Regulation. As far as we know, the effort devoted by the computer vision community to handling data protection issues, while retaining visual information of adequate quality for further processing, is still limited [170].

References

[1] M.E. Seligman, M. Csikszentmihalyi, Positive psychology: an introduction, Am. Psych. 55 (1) (2000) 5–14.
[2] G. Riva, R.M. Banos, C. Botella, B.K. Wiederhold, A. Gaggioli, Positive technology: using interactive technologies to promote positive functioning, Cyberpsych. Behav. Soc. Netw. 15 (2) (2012) 69–77.
[3] S. Stewart-Brown, Emotional wellbeing and its relation to health: physical disease may well result from emotional distress, BMJ 317 (7173) (1998) 1608.
[4] R.W. Picard, Affective medicine: technology with emotional intelligence, Stud. Health Technol. Inform. (2002) 69–84.
[5] R.T. Howell, M.L. Kern, S. Lyubomirsky, Health benefits: meta-analytically determining the impact of well-being on objective health outcomes, Health Psychol. Rev. 1 (1) (2007) 83–136.
[6] Y. Chida, A. Steptoe, Positive psychological well-being and mortality: a quantitative review of prospective observational studies, Psychosom. Med. 70 (7) (2008) 741–756.
[7] L. Bolier, M. Haverman, G.J. Westerhof, H. Riper, F. Smit, E. Bohlmeijer, Positive psychology interventions: a meta-analysis of randomized controlled studies, BMC Public Health 13 (1) (2013) 1–20.
[8] K. Giannakouris, Ageing characterises the demographic perspectives of the European societies, Stat. Focus 72 (2008) 1–11.
[9] F. Clark, J. Jackson, M. Carlson, C.P. Chou, B.J. Cherry, M. Jordan-Marsh, B.G. Knight, D. Mandel, J. Blanchard, D.A. Granger, et al., Effectiveness of a lifestyle intervention in promoting the well-being of independently living older people: results of the Well Elderly 2 randomised controlled trial, J. Epidemiol. Comm. Health 66 (9) (2012) 782–790.
[10] E. Diener, M.Y. Chan, Happy people live longer: subjective well-being contributes to health and longevity, Appl. Psychol. 3 (1) (2011) 1–43.
[11] G. Riva, R.M. Banos, C. Botella, B.K. Wiederhold, A. Gaggioli, Positive technology for healthy living and active ageing, Act. Ageing Healthy Living 203 (2014) 44.
[12] T. Sander, Positive computing, in: Positive Psychology as Social Change, Springer, 2011, pp. 309–326.
[13] R.A. Calvo, D. Peters, Positive Computing: Technology for Wellbeing and Human Potential, MIT Press, 2014.
[14] C. Martini, A. Barla, F. Odone, A. Verri, G.A. Rollandi, A. Pilotto, Data-driven continuous assessment of frailty in older people, Front. Dig. Hum. 5 (2018) 6.
[15] H. Bouma, J.L. Fozard, D.G. Bouwhuis, V. Taipale, Gerontechnology in perspective, Gerontechnology 6 (4) (2007) 190–216.
[16] P. Rashidi, A. Mihailidis, A survey on ambient-assisted living tools for older adults, IEEE J. Biomed. Health Inf. 17 (3) (2013) 579–590.
[17] M. Memon, S.R. Wagner, C.F. Pedersen, F.H.A. Beevi, F.O. Hansen, Ambient assisted living healthcare frameworks, platforms, standards, and quality attributes, Sensors 14 (3) (2014) 4312–4341.
[18] A. Queirós, A. Silva, J. Alvarelhão, N.P. Rocha, A. Teixeira, Usability, accessibility and ambient-assisted living: a systematic literature review, Universal Access in the Information Society 14 (1) (2015) 57–66.
[19] A. Luneski, E. Konstantinidis, P. Bamidis, Affective medicine: a review of affective computing efforts in medical informatics, Methods Inf. Med. 49 (03) (2010) 207–218.
[20] M.E. Seligman, Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment, Simon & Schuster, 2002.
[21] D. Pal, T. Triyason, S. Funilkul, W. Chutimaskul, Smart homes and quality of life for the elderly: perspective of competing models, IEEE Access 6 (2018) 8109–8122.
[22] M. Dimiccoli, Computer vision for egocentric (first-person) vision, in: Computer Vision for Assistive Healthcare, Elsevier, 2018, pp. 183–210.
[23] G. Acampora, D.J. Cook, P. Rashidi, A.V. Vasilakos, A survey on ambient intelligence in health care, Proc. IEEE 101 (12) (2013) 2470–2494.
[24] D.J. Cook, How smart is your home? Science 335 (6076) (2012) 1579–1581.


[25] N. Labonnote, K. Høyland, Smart home technologies that support independent living: challenges and opportunities for the building industry - a systematic mapping study, Intell. Build. 9 (1) (2017) 40–63.
[26] T. Kleinberger, M. Becker, E. Ras, A. Holzinger, P. Müller, Ambient intelligence in assisted living: enable elderly people to handle future interfaces, in: Int. Conf. Universal Access in Human-Computer Interaction, 2007, pp. 103–112.
[27] J. Vlasselaer, C.F. Crispim-Junior, F. Bremond, A. Dries, Behave - behavioral analysis of visual events for assisted living scenarios, in: IEEE ICCV, 2017, pp. 1347–1353.
[28] S. Lauriks, A. Reinersmann, H.G.V.d. Roest, F. Meiland, R.J. Davies, F. Moelaert, M.D. Mulvenna, C.D. Nugent, R.M. Dröes, Review of ICT-based services for identified unmet needs in people with dementia, Ageing Res. Rev. 6 (3) (2007) 223–246.
[29] J. Broekens, M. Heerink, H. Rosendal, et al., Assistive social robots in elderly care: a review, Gerontechnology 8 (2) (2009) 94–103.
[30] D. Fischinger, P. Einramhof, K. Papoutsakis, W. Wohlkinger, P. Mayer, P. Panek, S. Hofmann, T. Koertner, A. Weiss, A. Argyros, et al., Hobbit, a care robot supporting independent living at home: first prototype and lessons learned, Rob. Auton. Syst. 75 (2016) 60–78.
[31] F. Veronese, H. Saidinejad, S. Comai, F. Salice, Elderly monitoring and AAL for independent living at home: human needs, technological issues, and dependability, in: Optimizing Assistive Technologies for Aging Populations, IGI Global, 2016, pp. 154–181.
[32] M. Leo, G. Medioni, M. Trivedi, T. Kanade, G.M. Farinella, Computer vision for assistive technologies, Comp. Vis. Image Understanding 154 (2017) 1–15.
[33] C. Martini, N. Noceti, M. Chessa, A. Barla, A. Cella, G.A. Rollandi, A. Pilotto, A. Verri, F. Odone, A visual computing approach for estimating the motility index in the frail elder, in: Int. Conf. on Comp. Vis., Im. and Comp. Graph., INSTICC, 2018, pp. 439–445.
[34] M. Yu, A. Rhuma, S.M. Naqvi, L. Wang, J. Chambers, A posture recognition-based fall detection system for monitoring an elderly person in a smart home environment, IEEE Trans. Inf. Tech. Biomed. 16 (6) (2012) 1274–1286.
[35] E. Lupiani, J.M. Juarez, J. Palma, R. Marin, Monitoring elderly people at home with temporal case-based reasoning, Knowl. Based Syst. 134 (2017) 116–134.
[36] G. Ciocca, P. Napoletano, R. Schettini, Food recognition and leftover estimation for daily diet monitoring, in: Int. Conf. on Image Analysis and Processing, 2015, pp. 334–341.
[37] F. Mueller, F. Bernard, O. Sotnychenko, D. Mehta, S. Sridhar, D. Casas, C. Theobalt, GANerated hands for real-time 3D hand tracking from monocular RGB, arXiv:1712.01057.
[38] S. Sathyanarayana, R.K. Satzoda, S. Sathyanarayana, S. Thambipillai, Vision-based patient monitoring: a comprehensive review of algorithms and technologies, J. Ambient Intel. Hum. Comp. 9 (2) (2018) 225–251.
[39] D. Brulin, Y. Benezeth, E. Courtial, Posture recognition based on fuzzy logic for home monitoring of the elderly, IEEE Trans. Inf. Tech. Biomed. 16 (5) (2012) 974–982.
[40] R.A. Güler, N. Neverova, I. Kokkinos, DensePose: dense human pose estimation in the wild, arXiv:1802.00434.
[41] S. Nigam, R. Singh, A. Misra, A review of computational approaches for human behavior detection, Arch. Comp. Meth. Eng. (2018) 1–33.
[42] S. Herath, M. Harandi, F. Porikli, Going deeper into action recognition: a survey, Image Vis. Comput. 60 (2017) 4–21.
[43] J. Carreira, A. Zisserman, Quo vadis, action recognition? A new model and the Kinetics dataset, in: IEEE CVPR, 2017, pp. 4724–4733.
[44] D. Tran, L. Bourdev, R. Fergus, L. Torresani, M. Paluri, Learning spatiotemporal features with 3D convolutional networks, in: IEEE ICCV, 2015, pp. 4489–4497.
[45] S. Song, C. Lan, J. Xing, W. Zeng, J. Liu, An end-to-end spatio-temporal attention model for human action recognition from skeleton data, in: AAAI, 2017, pp. 4263–4270.
[46] Z. Zhang, C. Conly, V. Athitsos, A survey on vision-based fall detection, in: Int. Conf. on Perv. Tech. Rel. Ass. Env., 2015, p. 46.
[47] N. Lu, Y. Wu, L. Feng, J. Song, Deep learning for fall detection: 3D-CNN combined with LSTM on video kinematic data, IEEE J. Biomed. Health Inf. (2018), forthcoming.
[48] R.S. Leung, T.D. Bradley, Sleep apnea and cardiovascular disease, Am. J. Resp. Crit. Care Med. 164 (12) (2001) 2147–2165.
[49] C.W. Wang, A. Hunter, N. Gravill, S. Matusiewicz, Unconstrained video monitoring of breathing behavior and application to diagnosis of sleep apnea, IEEE Trans. Biomed. Eng. 61 (2) (2014) 396–404.
[50] F. Deng, J. Dong, X. Wang, Y. Fang, Y. Liu, Z. Yu, J. Liu, F. Chen, Design and implementation of a noncontact sleep monitoring system using infrared cameras and motion sensor, IEEE Trans. Instrum. Meas. 67 (7) (2018) 1555–1563.
[51] G. Ciocca, P. Napoletano, R. Schettini, Food recognition: a new dataset, experiments, and results, IEEE J. Biomed. Health Inf. 21 (3) (2017) 588–598.
[52] S. Mezgec, B.K. Seljak, NutriNet: a deep learning food and drink image recognition system for dietary assessment, Nutrients 9 (7) (2017) 657.
[53] E.A.A. Hippocrate, H. Suwa, Y. Arakawa, K. Yasumoto, Food weight estimation using smartphone and cutlery, in: Work. on IoT-enabled Healthcare and Well. Tech. and Sys., 2016, pp. 9–14.
[54] Y. He, C. Xu, N. Khanna, C.J. Boushey, E.J. Delp, Food image analysis: segmentation, identification and weight estimation, in: Multimedia and Expo, 2013, pp. 1–6.
[55] A.L.S. Kawamoto, F.S.C.d. Silva, Depth-sensor applications for the elderly: a viable option to promote a better quality of life, IEEE Consum. Electron. Mag. 7 (1) (2018) 47–56.
[56] P. Wang, W. Li, P.O. Ogunbona, J. Wan, S. Escalera, RGB-D-based motion recognition with deep learning: a survey, Int. J. Comp. Vis. (2018), forthcoming.
[57] D.B. Yaden, J.C. Eichstaedt, J.D. Medaglia, The future of technology in positive psychology: methodological advances in the science of well-being, Front. in Psychol. 9 (2018) 962.
[58] T.D. Parsons, A. Gaggioli, G. Riva, Virtual reality for research in social neuroscience, Brain Sci. 7 (4) (2017) 42.
[59] D. Freeman, S. Reeve, A. Robinson, A. Ehlers, D. Clark, B. Spanlang, M. Slater, Virtual reality in the assessment, understanding, and treatment of mental health disorders, Psychol. Med. 47 (14) (2017) 2393–2400.
[60] S.S. Rautaray, A. Agrawal, Vision based hand gesture recognition for human computer interaction: a survey, Artif. Intell. Rev. 43 (1) (2015) 1–54.
[61] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, B. MacIntyre, Recent advances in augmented reality, IEEE Comput. Graph. Appl. 21 (6) (2001) 34–47.
[62] A. Dey, M. Billinghurst, R.W. Lindeman, J. Swan, A systematic review of 10 years of augmented reality usability studies: 2005 to 2014, Front. Rob. AI 5 (2018) 37.
[63] Z. Noh, M.S. Sunar, Z. Pan, A review on augmented reality for virtual heritage system, in: Int. Conf. on Tech. for E-Learning and Dig. Enter., 2009, pp. 50–61.
[64] B.H. Thomas, A survey of visual, mixed, and augmented reality gaming, Comput. Entertainment (CIE) 10 (1) (2012) 3.
[65] H.K. Wu, S.W.Y. Lee, H.Y. Chang, J.C. Liang, Current status, opportunities and challenges of augmented reality in education, Comput. Educ. 62 (2013) 41–49.
[66] H.-n. Yoo, E. Chung, B.-H. Lee, The effects of augmented reality-based Otago exercise on balance, gait, and falls efficacy of elderly women, J. Phys. Ther. Sci. 25 (7) (2013) 797–801.
[67] A.C. McLaughlin, L.A. Matalenas, M.G. Coleman, Design of human centered augmented reality for managing chronic health conditions, in: Aging, Technology and Health, Elsevier, 2018, pp. 261–296.
[68] A. Mihailidis, J.N. Boger, T. Craig, J. Hoey, The COACH prompting system to assist older adults with dementia through handwashing: an efficacy study, BMC Geriatrics 8 (1) (2008).
[69] A. Malhotra, J. Hoey, A. Konig, S. van Vuuren, A study of elderly people's emotional understanding of prompts given by virtual humans, in: Int. Conf. on Pervasive Comp. Tech. for Healthcare, 2016, pp. 13–16.
[70] L. Lin, S. Czarnuch, A. Malhotra, L. Yu, T. Schroeder, J. Hoey, Affectively aligned cognitive assistance using Bayesian affect control theory, in: Int. Work. on Ambient Assisted Living, 2014, pp. 279–287.
[71] J. Hoey, T. Schroder, A. Alhothali, Bayesian affect control theory, in: Int. Conf. Affective Comp. and Intell. Interaction, 2013, pp. 166–172.
[72] T. Kanade, M. Hebert, First-person vision, Proc. IEEE 100 (8) (2012) 2442–2453.
[73] M. Leo, G.M. Farinella, Computer Vision for Assistive Healthcare, Academic Press, 2018.
[74] D. Damen, T. Leelasawassuk, W. Mayol-Cuevas, You-do, I-learn: egocentric unsupervised discovery of objects and their modes of interaction towards video-based guidance, Comput. Vis. Image Understanding 149 (2016) 98–112.
[75] S. Alletto, G. Serra, S. Calderara, R. Cucchiara, Understanding social relationships in egocentric vision, Pattern Recognit. 48 (12) (2015) 4082–4096.
[76] B. Soran, A. Farhadi, L. Shapiro, Generating notifications for missing actions: don't forget to turn the lights off!, in: Proc. of the IEEE Int. Conf. on Comp. Vision, 2015, pp. 4669–4677.
[77] K. Zhan, S. Faux, F.

[56] P. Wang, W. Li, P.O. Ogunbona, J. Wan, S. Escalera, RGB-D-based motion recognition with deep learning: a survey, Int J. Comp. Vis. (2018). (forthcoming) [57] D.B. Yaden, J.C. Eichstaedt, J.D. Medaglia, The future of technology in positive psychology: methodological advances in the science of well-being, Front. in Psychol. 9 (2018) 962. [58] T.D. Parsons, A. Gaggioli, G. Riva, Virtual reality for research in social neuroscience, Brain Sci. 7 (4) (2017) 42. [59] D. Freeman, S. Reeve, A. Robinson, A. Ehlers, D. Clark, B. Spanlang, M. Slater, Virtual reality in the assessment, understanding, and treatment of mental health disorders, Psychol. Med. 47 (14) (2017) 2393–2400. [60] S.S. Rautaray, A. Agrawal, Vision based hand gesture recognition for human computer interaction: a survey, Artif. Intell. Rev. 43 (1) (2015) 1–54. [61] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, B. MacIntyre, Recent advances in augmented reality, IEEE Comput. Graph. Appl. 21 (6) (2001) 34–47. [62] A. Dey, M. Billinghurst, R.W. Lindeman, J. Swan, A systematic review of 10 years of augmented reality usability studies: 2005 to 2014, Front. Rob. AI 5 (2018) 37. [63] Z. Noh, M.S. Sunar, Z. Pan, A review on augmented reality for virtual heritage system, in: Int. Conf. on Tech. for E-Learning and Dig. Enter., 2009, pp. 50–61. [64] B.H. Thomas, A survey of visual, mixed, and augmented reality gaming, Comput. Entertainment (CIE) 10 (1) (2012) 3. [65] H.K. Wu, S.W.Y. Lee, H.Y. Chang, J.C. Liang, Current status, opportunities and challenges of augmented reality in education, Comput. Educ. 62 (2013) 41–49. [66] H.-n. Yoo, E. Chung, B.-H. Lee, The effects of augmented reality-based Otago exercise on balance, gait, and falls efficacy of elderly women, J. Phys. Ther. Sci. 25 (7) (2013) 797–801. [67] A.C. McLaughlin, L.A. Matalenas, M.G. Coleman, Design of human centered augmented reality for managing chronic health conditions, in: Aging, Technology and Health, Elsevier, 2018, pp. 261–296. [68] A. Mihailidis, J.N. Boger, T. Craig, J. Hoey, The coach prompting system to assist older adults with dementia through handwashing: an efficacy study, BMC Geriatrics 8 (1) (2008). [69] A. Malhotra, J. Hoey, A. Konig, S. van Vuuren, A study of elderly people’s emotional understanding of prompts given by virtual humans, in: Int. Conf. on Pervasive Comp. Tech. for Healthcare, 2016, pp. 13–16. [70] L. Lin, S. Czarnuch, A. Malhotra, L. Yu, T. Schroeder, J. Hoey, Affectively aligned cognitive assistance using Bayesian affect control theory, in: Int. Work. on Ambient Assisted Living, 2014, pp. 279–287. [71] J. Hoey, T. Schroder, A. Alhothali, Bayesian affect control theory, in: Int. Conf. Affective Comp. and Intell. Interaction, 2013, pp. 166–172. [72] T. Kanade, M. Hebert, First-person vision, Proc. IEEE 100 (8) (2012) 2442–2453. [73] M. Leo, G.M. Farinella, Computer Vision for Assistive Healthcare, Academic Press, 2018. [74] D. Damen, T. Leelasawassuk, W. Mayol-Cuevas, You-do, i-learn: egocentric unsupervised discovery of objects and their modes of interaction towards video-based guidance, Comput. Vis. ImageUnderstanding 149 (2016) 98–112. [75] S. Alletto, G. Serra, S. Calderara, R. Cucchiara, Understanding social relationships in egocentric vision, Pattern Recognit. 48 (12) (2015) 4082–4096. [76] B. Soran, A. Farhadi, L. Shapiro, Generating notifications for missing actions: don’t forget to turn the lights off!, in: Proc. of the IEEE Int. Conf. on Comp. Vision, 2015, pp. 4669–4677. [77] K. Zhan, S. Faux, F. 
Ramos, Multi-scale conditional random fields for first-person activity recognition on elders and disabled patients, Pervasive Mob. Comput. 16 (2015) 251–267. [78] K.E. Asnaoui, A. Hamid, A. Brahim, O. Mohammed, A survey of activity recognition in egocentric lifelogging datasets, in: Int. Conf. on Wireless Tech., Embedded and Intelligent Systems, 2017, pp. 1–8. [79] D. Damen, H. Doughty, G.M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, M. Wray, Scaling egocentric vision: the epic-kitchens dataset, Eur. Conf. Comput. Vis. (2018) 720–736. [80] M.B. nos, M. Dimiccoli, P. Radeva, Toward storytelling from visual lifelogging: an overview, IEEE Trans. Hum.-Mach. Syst. 47 (1) (2017) 77–90. [81] A. Furnari, G.M. Farinella, S. Battiato, Recognizing personal locations from egocentric videos, IEEE Trans. Hum.-Mach. Syst. 47 (1) (2017) 6–18. [82] G. Vaca-Castano, S. Das, J.P. Sousa, N.D. Lobo, M. Shah, Improved scene identification and object detection on egocentric vision of daily activities, Comput. Vis. Image Understanding 156 (2017) 92–103. [83] A. Betancourt, N. Díaz-Rodríguez, E. Barakova, L. Marcenaro, M. Rauterberg, C. Regazzoni, Unsupervised understanding of location and illumination changes in egocentric videos, Pervasive Mob. Comput. 40 (2017) 414–429. [84] G. Abebe, A. Cavallaro, X. Parra, Robust multi-dimensional motion features for first-person vision activity recognition, Comput. Vis. Image Understanding 149 (2016) 229–248. [85] H. Pirsiavash, D. Ramanan, Detecting activities of daily living in first-person camera views, in: 2012 IEEE Conf. on Comp. Vision and Pattern Recognition, 2012, pp. 2847–2854. [86] A. Furnari, S. Battiato, K. Grauman, G.M. Farinella, Next-active-object prediction from egocentric videos, J. Visual Commun. Image Represent. 49 (2017) 401–411. [87] R.W. Picard, et al., Affective Computing, Media Lab MIT, 1995. [88] Z. Zeng, M. Pantic, G.I. Roisman, T.S. Huang, A survey of affect recognition methods: audio, visual, and spontaneous expressions, IEEE Trans. Pattern Anal. Mach. Intell. 31 (1) (2009) 39–58.

[89] R.A. Calvo, S. D'Mello, Affect detection: an interdisciplinary review of models, methods, and their applications, IEEE Trans. Affective Comput. 1 (1) (2010) 18–37.
[90] S. Poria, E. Cambria, R. Bajpai, A. Hussain, A review of affective computing: from unimodal analysis to multimodal fusion, Inf. Fusion 37 (2017) 98–125.
[91] M.E. Ayadi, M.S. Kamel, F. Karray, Survey on speech emotion recognition: features, classification schemes, and databases, Pattern Recognit. 44 (3) (2011) 572–587.
[92] B.C. Ko, A brief review of facial emotion recognition based on visual information, Sensors 18 (2) (2018) 401.
[93] B. Martinez, M.F. Valstar, B. Jiang, M. Pantic, Automatic analysis of facial actions: a survey, IEEE Trans. Affective Comput. (2018) (forthcoming).
[94] M. Karg, A.A. Samadani, R. Gorbet, K. Kühnlenz, J. Hoey, D. Kulić, Body movements for affective expression: a survey of automatic recognition and generation, IEEE Trans. Affective Comput. 4 (4) (2013) 341–359.
[95] A. Kleinsmith, N. Bianchi-Berthouze, Affective body expression perception and recognition: a survey, IEEE Trans. Affective Comput. 4 (1) (2013) 15–33.
[96] H. Zacharatos, C. Gatzoulis, Y.L. Chrysanthou, Automatic emotion recognition based on body movement analysis: a survey, IEEE Comput. Graph. Appl. 34 (6) (2014) 35–45.
[97] D. Novak, M. Mihelj, M. Munih, A survey of methods for data fusion and system adaptation using autonomic nervous system responses in physiological computing, Interact. Comput. 24 (3) (2012) 154–172.
[98] G. Boccignone, D. Conte, V. Cuculo, A. D'Amelio, G. Grossi, R. Lanzarotti, Deep construction of an affective latent space via multimodal enactment, IEEE Trans. Cognit. Dev. Syst. (2018) (forthcoming).
[99] G. Boccignone, D. Conte, V. Cuculo, R. Lanzarotti, AMHUSE: a multimodal dataset for humour sensing, in: Proc. of the 19th ACM Int. Conf. on Multimodal Interaction, 2017, pp. 438–445.
[100] H.A. Osman, T.H. Falk, Multimodal affect recognition: current approaches and challenges, in: Emotion and Attention Recognition Based on Biological Signals and Images, InTech, 2017, pp. 59–86.
[101] G. Guo, R. Guo, X. Li, Facial expression recognition influenced by human aging, IEEE Trans. Affective Comput. 4 (3) (2013) 291–298.
[102] M. Fölster, U. Hess, K. Werheid, Facial age affects emotional expression decoding, Front. Psychol. 5 (2014) 30.
[103] N. Rodrigues, A. Pereira, A user-centred well-being home for the elderly, Appl. Sci. 8 (6) (2018) 850.
[104] S. Anderson, T. White-Schwoch, A. Parbery-Clark, N. Kraus, A dynamic auditory-cognitive system supports speech-in-noise perception in older adults, Hearing Res. 300 (2013) 18–32.
[105] S. Chatterji, J. Byles, D. Cutler, T. Seeman, E. Verdes, Health, functioning, and disability in older adults – present status and future implications, Lancet 385 (9967) (2015) 563–575.
[106] A. Kuijsters, J. Redi, B. de Ruyter, I. Heynderickx, Lighting to make you feel better: improving the mood of elderly people with affective ambiences, PLoS ONE 10 (7) (2015) e0132732.
[107] K. Smitha, N. Robinson, A. Vinod, A study on the effect of emotion evoking videos on physiological signals, in: Int. Conf. Devices, Circuits and Systems, 2014, pp. 1–5.
[108] D. Hazer, X. Ma, S. Rukavina, S. Gruss, S. Walter, H.C. Traue, Emotion elicitation using film clips: effect of age groups on movie choice and emotion rating, in: Int. Conf. Human-Comp. Inter., 2015, pp. 110–116.
[109] R.M. Baños, E. Etchemendy, D. Castilla, A. Garcia-Palacios, S. Quero, C. Botella, Positive mood induction procedures for virtual environments designed for elderly people, Interact. Comput. 24 (3) (2012) 131–138.
[110] N. Mammarella, B. Fairfield, C. Cornoldi, Does music enhance cognitive performance in healthy older adults? The Vivaldi effect, Aging Clin. Exp. Res. 19 (5) (2007) 394–399.
[111] J. Anttonen, V. Surakka, Music, heart rate, and emotions in the context of stimulating technologies, in: Int. Conf. on Affective Computing and Intelligent Interaction, 2007, pp. 290–301.
[112] J.C. Castillo, A. Castro-González, A. Fernández-Caballero, J.M. Latorre, J.M. Pastor, A. Fernández-Sotos, M.A. Salichs, Software architecture for smart emotion recognition and regulation of the ageing adult, Cognit. Comput. 8 (2) (2016) 357–367.
[113] X. Zhang, H.W. Yu, L.F. Barrett, How does this make you feel? A comparison of four affect induction procedures, Front. Psychol. 5 (2014) 689.
[114] C. Jallais, A.L. Gilet, Inducing changes in arousal and valence: comparison of two mood induction procedures, Behav. Res. Methods 42 (1) (2010) 318–325.
[115] A. Fernández-Caballero, A. Martínez-Rodrigo, J.M. Pastor, J.C. Castillo, E. Lozano-Monasor, M.T. López, R. Zangróniz, J.M. Latorre, A. Fernández-Sotos, Smart environment architecture for emotion detection and regulation, J. Biomed. Inform. 64 (2016) 55–73.
[116] L.Y. Mano, B.S. Faiçal, L.H. Nakamura, P.H. Gomes, G.L. Libralon, R.I. Meneguete, G.P. Filho, G.T. Giancristofaro, G. Pessin, B. Krishnamachari, J. Ueyama, Exploiting IoT technologies for enhancing health smart homes through patient identification and emotion recognition, Comput. Commun. 89 (2016) 178–190.
[117] G. Muhammad, M. Alsulaiman, S.U. Amin, A. Ghoneim, M.F. Alhamid, A facial-expression monitoring system for improved healthcare in smart cities, IEEE Access 5 (2017) 10871–10881.
[118] M.J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, J. Budynek, The Japanese female facial expression (JAFFE) database, in: Int. Conf. on Automatic Face and Gesture Recognition, 1998, pp. 14–16.
[119] T. Kanade, J.F. Cohn, Y. Tian, Comprehensive database for facial expression analysis, in: IEEE Int. Conf. Automatic Face and Gesture Recognition, 2000, pp. 46–53.
[120] E. Lozano-Monasor, M.T. López, F. Vigo-Bustos, A. Fernández-Caballero, Facial expression recognition in ageing adults: from lab to ambient assisted living, J. Ambient Intell. Hum. Comput. 8 (4) (2017) 567–578.
[121] Y. Yaddaden, A. Bouzouane, M. Adda, B. Bouchard, A new approach of facial expression recognition for ambient assisted living, in: Int. Conf. on Pers. Tech. Rel. to Ass. Env., 2016, p. 14.
[122] S. Tivatansakul, M. Ohkura, S. Puangpontip, T. Achalakul, Emotional healthcare system: emotion detection by facial expressions using Japanese database, in: Comp. Sci. and Elect. Eng. Conf., 2014, pp. 41–46.
[123] M. Alhussein, Automatic facial emotion recognition using Weber local descriptor for e-healthcare system, Cluster Comput. 19 (1) (2016) 99–108.
[124] M.Z. Uddin, M.M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, G. Fortino, Facial expression recognition utilizing local direction-based robust features and deep belief network, IEEE Access 5 (2017) 4525–4536.
[125] A. Caroppo, A. Leone, P. Siciliano, Facial expression recognition in older adults using deep machine learning, in: Proc. Third Ital. Work. Artif. Intell. Ambient Assist. Living, 2017, pp. 30–43.
[126] N.C. Ebner, M. Riediger, U. Lindenberger, FACES – a database of facial expressions in young, middle-aged, and older women and men: development and validation, Behav. Res. Meth. 42 (1) (2010) 351–362.
[127] M. Minear, D.C. Park, A lifespan database of adult facial stimuli, Behav. Res. Methods Instrum. Comput. 36 (4) (2004) 630–633.
[128] C. Katsimerou, I. Heynderickx, J.A. Redi, Predicting mood from punctual emotion annotations on videos, IEEE Trans. Affective Comput. 6 (2) (2015) 179–192.
[129] A. Tapus, A. Bandera, R. Vazquez-Martin, L.V. Calderita, Perceiving the person and their interactions with the others for social robotics – a review, Pattern Recognit. Lett. (2018) (forthcoming).
[130] M. Levasseur, L. Richard, L. Gauvin, E. Raymond, Inventory and analysis of definitions of social participation found in the aging literature: proposed taxonomy of social activities, Soc. Sci. Med. 71 (12) (2010) 2141–2149.
[131] S. Bano, J. Zhang, S.J. McKenna, Finding time together: detection and classification of focused interaction in egocentric video, in: IEEE Int. Conf. on Comp. Vision Workshops, 2017, pp. 2322–2330.
[132] J. Berclaz, F. Fleuret, E. Türetken, P. Fua, Multiple object tracking using k-shortest paths optimization, IEEE Trans. Pattern Anal. Mach. Intell. 33 (9) (2011) 1806–1819.
[133] S. Pellegrini, A. Ess, K. Schindler, L.J.V. Gool, You'll never walk alone: modeling social behavior for multi-target tracking, in: IEEE ICCV, 2009, pp. 261–268.
[134] W. Choi, S. Savarese, A unified framework for multi-target tracking and collective activity recognition, in: Europ. Conf. on Comp. Vision, 2012, pp. 215–230.
[135] N. Noceti, F. Odone, Humans in groups: the importance of contextual information for understanding collective activities, Pattern Recognit. 47 (11) (2014) 3535–3551.
[136] A. Fathi, J.K. Hodgins, J.M. Rehg, Social interactions: a first-person perspective, in: IEEE CVPR, 2012, pp. 1226–1233.
[137] T. Lan, Y. Wang, W. Yang, G. Mori, Beyond actions: discriminative models for contextual group activities, in: Advances in Neural Information Processing Systems, 2010, pp. 1216–1224.
[138] B. Ni, S. Yan, A. Kassim, Recognizing human group activities with localized causalities, in: IEEE CVPR, 2009, pp. 1470–1477.
[139] X. Alameda-Pineda, J. Staiano, R. Subramanian, L. Batrinca, E. Ricci, B. Lepri, O. Lanz, N. Sebe, SALSA: a novel dataset for multimodal group behavior analysis, IEEE Trans. Pattern Anal. Mach. Intell. 38 (8) (2016) 1707–1720.
[140] J. Varadarajan, R. Subramanian, S.R. Bulò, N. Ahuja, O. Lanz, E. Ricci, Joint estimation of human pose and conversational groups from social scenes, Int. J. Comput. Vis. 126 (2–4) (2018) 410–429.
[141] M.J. Marín-Jiménez, A. Zisserman, M. Eichner, V. Ferrari, Detecting people looking at each other in videos, Int. J. Comput. Vis. 106 (3) (2014) 282–296.
[142] A. Patron-Perez, M. Marszalek, I. Reid, A. Zisserman, Structured learning of human interactions in TV shows, IEEE Trans. Pattern Anal. Mach. Intell. 34 (12) (2012) 2441–2453.
[143] R. Trabelsi, J. Varadarajan, Y. Pei, L. Zhang, I. Jabri, A. Bouallegue, P. Moulin, Robust multi-modal cues for dyadic human interaction recognition, in: Proc. of the Workshop on Multimodal Understanding of Social, Affective and Subjective Attributes, 2017, pp. 47–53.
[144] N.M. Oliver, B. Rosario, A.P. Pentland, A Bayesian computer vision system for modeling human interactions, IEEE Trans. Pattern Anal. Mach. Intell. 22 (8) (2000) 831–843.
[145] W. Choi, Y.-W. Chao, C. Pantofaru, S. Savarese, Discovering groups of people in images, in: Eur. Conf. on Comp. Vision, 2014, pp. 417–433.
[146] M. Eichner, V. Ferrari, We are family: joint pose estimation of multiple persons, in: Comp. Vision - ECCV 2010, 2010, pp. 228–242.
[147] A. Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, Cambridge University Press, 1990.
[148] F. Setti, C. Russell, C. Bassetti, M. Cristani, F-formation detection: individuating free-standing conversational groups in images, PLoS ONE 10 (5) (2015) e0123783.
[149] M. Cristani, L. Bazzani, G. Paggetti, A. Fossati, D. Tosato, A.D. Bue, G. Menegaz, V. Murino, Social interaction discovery by statistical analysis of f-formations, in: BMVC, vol. 2, 2011, p. 4.


[150] E. Ricci, J. Varadarajan, R. Subramanian, S.R. Bulo, N. Ahuja, O. Lanz, Uncovering interactions and interactors: joint estimation of head, body orientation and f-formations from surveillance videos, in: IEEE ICCV, 2015, pp. 4660–4668.
[151] P. Dai, H. Di, L. Dong, L. Tao, G. Xu, Group interaction analysis in dynamic context, IEEE Trans. Syst. Man Cyb. Part B (Cybernetics) 38 (1) (2008) 275–282.
[152] D. Zhang, D. Gatica-Perez, S. Bengio, I. McCowan, Modeling individual and group actions in meetings with layered HMMs, IEEE Trans. Multimed. 8 (3) (2006) 509–520.
[153] S.O. Ba, J.M. Odobez, Multiperson visual focus of attention from head pose and meeting contextual cues, IEEE Trans. Pattern Anal. Mach. Intell. 33 (1) (2011) 101–116.
[154] Y. Kong, Y. Jia, Y. Fu, Interactive phrases: semantic descriptions for human interaction recognition, IEEE Trans. Pattern Anal. Mach. Intell. 36 (9) (2014) 1775–1788.
[155] A. Vahdat, B. Gao, M. Ranjbar, G. Mori, A discriminative key pose sequence model for recognizing human interactions, in: IEEE ICCV, 2011, pp. 1729–1736.
[156] Q. Ke, M. Bennamoun, S. An, F. Boussaid, F. Sohel, Human interaction prediction using deep temporal features, in: Europ. Conf. on Comp. Vision, 2016, pp. 403–414.
[157] L. Ding, A. Yilmaz, Inferring social relations from visual concepts, in: IEEE ICCV, 2011, pp. 699–706.
[158] V. Ramanathan, B. Yao, L. Fei-Fei, Social role discovery in human events, in: IEEE CVPR, 2013, pp. 2475–2482.
[159] D.J. Kiesler, The 1982 interpersonal circle: a taxonomy for complementarity in human transactions, Psychol. Rev. 90 (3) (1983) 185.

[160] Z. Zhang, P. Luo, C.C. Loy, X. Tang, From facial expression recognition to interpersonal relation prediction, Int. J. Comput. Vis. 126 (5) (2018) 550–569.
[161] C.-Y. Weng, W.-T. Chu, J.-L. Wu, RoleNet: movie analysis from the perspective of social networks, IEEE Trans. Multimed. 11 (2) (2009) 256–271.
[162] L. Ding, A. Yilmaz, Learning relations among movie characters: a social network perspective, in: Comp. Vision - ECCV, 2010, pp. 410–423.
[163] T. Lan, L. Sigal, G. Mori, Social roles in hierarchical models for human activity recognition, in: IEEE CVPR, 2012, pp. 1354–1361.
[164] V. Ramanathan, B. Yao, L. Fei-Fei, Social role recognition for human event understanding, in: Human-Cent. Soc. Med. Anal., Springer, 2014, pp. 75–93.
[165] Y.H. Wu, C. Fassert, A.S. Rigaud, Designing robots for the elderly: appearance issue and beyond, Arch. Gerontol. Geriatr. 54 (1) (2012) 121–126.
[166] J. Fasola, M.J. Mataric, Using socially assistive human-robot interaction to motivate physical exercise for older adults, Proc. IEEE 100 (8) (2012) 2512–2526.
[167] K. Wada, T. Shibata, T. Saito, K. Tanie, Analysis of factors that bring mental effects to elderly people in robot assisted activity, in: Int. Conf. Intelligent Robots and Systems, vol. 2, 2002, pp. 1152–1157.
[168] C.D. Kidd, W. Taggart, S. Turkle, A sociable robot to encourage social interaction among the elderly, in: IEEE Int. Conf. Rob. and Aut., 2006, pp. 3972–3976.
[169] C.X. Lin, C. Lee, D. Lally, J.F. Coughlin, Impact of virtual reality (VR) experience on older adults' well-being, in: Human Asp. of IT for the Aged Pop. App. in Heal., Ass., and Entert., 2018, pp. 89–100.
[170] E. Thelisson, K. Sharma, H. Salam, V. Dignum, The General Data Protection Regulation: an opportunity for the HCI community?, in: Int. Conf. on Hum. Fact. in Comp. Sys., 2018, p. W36.
