Computers in Human Behavior xxx (2017) 1–14


Full length article

Shopping with a robotic companion

Francesca Bertacchini a, Eleonora Bilotta b,*, Pietro Pantano b

a Dipartimento di Ingegneria Meccanica, Energetica e Gestionale - DIMEG, Cubo 46C, Università della Calabria, 87036 Arcavacata di Rende, CS, Italy
b Dipartimento di Fisica, Cubo 17B, Università della Calabria, 87036 Arcavacata di Rende, CS, Italy

Article history: Received 10 December 2016; Received in revised form 12 February 2017; Accepted 27 February 2017; Available online xxx

Keywords: Social robotics; Human Robot Interaction (HRI); Emotion and Gesture Recognition; Machine learning; Smart retail settings

Abstract

In this paper, we present a robotic shopping assistant, designed with a cognitive architecture grounded in machine learning systems, in order to study how human-robot interaction (HRI) is changing shopping behavior in smart technological stores. In the software environment of the NAO robot, connected to the Internet with cloud services, we designed a social-like interaction in which the robot carries out actions with the customer. In particular, we focused our design on two main skills the robot has to learn: the first is the ability to acquire social input communicated by relevant cues that humans provide about their emotional state (emotions, emotional speech), or collected from social media (such as information on the customer's tastes, cultural background, etc.). The second is the skill to express in turn its own emotional state, so that it can affect the customer's buying decision, reinforcing in the user the sense of interacting with a human-like companion. By combining social robotics and machine learning systems, the potential of robotics to assist people in real life situations will increase, easing customers' acceptance of advanced technologies. © 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Shopping centers are becoming smart, improving customer satisfaction with tangible services, reliability, prompt responses to customers' needs, courteous and trustworthy employees, and a sense of empathy, in order to compete with today's advanced Internet of Things (IoT) platforms (Balaji & Roy, 2016). Not only are they equipped with the most recent technologies, the fastest fiber Internet connections, cloud computing facilities and analytic methods, but they also have autonomous robots performing the tasks of sales assistants. All around the world, robots are entering shopping centers, attesting that robotic technology is gaining ground. The rationale for this choice lies in the fact that retail shops need to offer trendy yet low-cost alternatives to e-commerce, reducing the operating costs of personnel management (Francis et al., 2013). A robotic alternative can automate the logistics of the retail operation, at both the front-end and the back-end, thereby avoiding out-of-stocks (Che, Chen, & Chen, 2012) and the related loss of profits, defining a set of activities that

* Corresponding author. E-mail addresses: [email protected] (F. Bertacchini), [email protected] (E. Bilotta), [email protected] (P. Pantano).

allow various high-level tasks to be achieved successfully. In competing with the innovations introduced by e-commerce, physical stores may have lower costs and, of course, lower profits (Grewal, Roggeveen, & Nordfält, 2016; Guo & Hu, 2014). They must have trustworthy services that manage the shop while reducing personnel costs. Other important challenges that retailers are facing are having enough space to exhibit items on shelves, the manual allocation of goods, and the real-time updating of the shelves. Therefore, researchers have designed automated robot platforms that embed the above-mentioned functionality, navigating physical shops autonomously and integrating or partially substituting staff tasks (Kumar, Anand, & Song, 2016). According to the International Federation of Robotics (IFR, 2016), strong growth in the robot market is being achieved in the area of social services, with robot assistants especially devoted to surveillance, monitoring and domestic use. The number of service robots rose considerably, by 25% in 2015 compared to 2014. Service robots are used especially in supermarkets, but also at exhibitions and in museums as guides or information providers, and with edutainment aims in school settings (Bilotta et al., 2009; Bertacchini, Bilotta, Gabriele, Pantano, & Servidio, 2010; Gabriele, Tavernise, & Bertacchini, 2012). The service robotics sector is expected to grow in the medium term, both for professional and domestic use (see the Executive Summary World Robotics 2016 - Service Robots). Robots and Robot-as-a-Service

http://dx.doi.org/10.1016/j.chb.2017.02.064 0747-5632/© 2017 Elsevier Ltd. All rights reserved.




(RaaS) are being used to solve different problems in the retail industry (Kumar, 2016). Many humanoid robots have been exhibited in different shops around the world, from Japan to France to the USA, which are in part RaaS systems devoted to customer assistance. Technologically enriched environments allow for a huge variety of robot services. Customer assistance makes innovative use of the data stored in the shop's digital warehouse, or in cloud computing facilities. Users can access, visualize, share and, finally, capture the data in real time (Kejriwal, Garg, & Kumar, 2015). Given these existing functionalities, service robots could carry out activities such as creating a shopping cart or discovering where the customer's desired products are located in the shop. These tasks are of great help to customers, especially older ones. This digitally enriched system could also be displayed for the user as a ready-to-use virtual shop within the physical store. The shopper's purchases in different stores are recorded, and the system maps the path for the most common items, thus speeding up purchasing. Personal engagement could also be useful. In this future of robotic services, especially in fashion stores, a humanoid shopping assistant could collect physical information about what a customer likes or dislikes by gathering data from the customer's behavior (Pratiba, 2013), which will be stored as a repository of Big Data to segment the customer's preferences (Vojtovič, Navickas, & Gruzauskas, 2016). Furthermore, users' physical attributes (height, hair and eye color, etc.) could be scanned, and the robot can then recommend items according to the user's attributes, budget, previous shopping history and personal features. Other services are related to payment, such as currency exchange, gift packaging, price comparisons, and the dispensing of coupons. However, the great advantage of integrating robotic services in the retail sector is connected to the combined power of Robotics, Analytics, and Cloud (RAC), in the merging sector called cloud robotics (Proia, Simshaw, & Hauser, 2015), which connects robotics to the Internet for massively parallel computation and the sharing of huge data resources. The advantages reside in three main points: a. minimal upfront costs, as the robots are connected to a cloud server; b. robots can be designed as part of an intelligent ambient system (Ultes, Dikme, & Minker, 2016), thus interacting with surveillance cameras, RFID antennae (Nur, Morenza-Cinos, Carreras, & Pous, 2015; Zhang, Lyu, Roppel, Patton, & Senthilkumar, 2016), or with any item that has a sensor (as in the Internet of Intelligent Things, IoIT, paradigm); c. robots that use cloud computing and a reduced number of employees can eliminate overheads and related costs. By making robots smart and endowing them with robust computational skills, cloud robotics could be a catalyst for the growth of the consumer robotics marketplace. In this way, retailing could exploit technological changes in the business, improving the customer-robot interaction. However, what will be the future of robotics in retailing? How will robots change their behavior by interacting with people? Since the first simplified humanoid robot, built at Waseda University, Tokyo, Japan, in 1972, research has produced advanced, skilled robots.
They control motor behavior and stability, and exhibit intelligent behavior that allows them to carry out tasks in a human-like fashion, with social communication skills (Breazeal & Scassellati, 2002) and emotions (Picard & Picard, 1997; Fellous & Arbib, 2005). In fact, researchers are developing more socially competent robots that are able to collaborate with people, learning by interacting with other humans as infants do during their development (Merrick, 2017; Min, Luo, Zhu, & Bi, 2016). This approach

requires that robots learn to interact with people while carrying out tasks (Lehmann et al., 2013). So, what will happen in the near future in situations where humans and robots sharing the same goals collaborate in shops? Will it be possible to create patterns of social and emotional behavior that will be useful for fulfilling customers' needs? Until now, retail robots have been designed to do something for humans. However, if the aim is to create a more human-like social environment, the design of shop assistant robots has to connect humans and robots in a very special way, carrying out the purchase behavior together. As Takayama, Ju, and Nass (2008) found, "people would feel more positively toward robots doing occupations with people rather than in place of people". Transposing this approach to the market for shop use, we developed a robot to help the consumer select an item, considered one of the most important activities in the retail sector. To our knowledge, although several robotic applications have been created to provide technologically enhanced tools to customers, a social interaction between robots and the customer has not yet been realized. In this paper, we present a Human Robot Interaction (HRI) architecture based on the development of empathy and friendship, thus developing in the user a sense of satisfaction and of real social life. To develop such a robotic assistant, a cognitive and social task analysis is carried out in this paper. This will allow us: (i) to understand the social interaction between a robotic assistant and a customer; (ii) to analyze the verbal and nonverbal communication and the expression of emotions in both actors of this interaction; (iii) to define the specific cognitive architecture to be developed, thus improving the potential of robots to assist people in real life situations; (iv) to create a machine learning system, with several specific modules devoted to nonverbal behavior, such as hand and body gestures, and to sentiment analysis of language, in order to correlate emotions and emotional language. Given the previous theoretical and practical premises, the main aim of this paper is to present a prototypical robotic application in the retailing sector, with the specific objective of helping consumers carry out their usual tasks in shops. The main theoretical contribution is the design of an autonomous robot, endowed with advanced Artificial Intelligence and machine learning programs, tailored for a practical implementation with the NAO1 robot. The paper is organized as follows. After the Introduction, the current literature on the impact of robots in the retail setting is presented in Section 2. Sections 3 and 4 explain some models of social learning and the basic machine learning concepts. The NAO robot used for this implementation, with its hardware and software architecture, is described in Section 5. Section 6 illustrates the cognitive architecture, realized as a coordinated machine learning system, providing information on the tasks performed by both customer and robot in the shopping environment. Empirical results are reported in Section 7. A discussion of the complexity of the technological and management scenario in the retail sector is presented in Section 8. Finally, some considerations and future developments on the acceptance of new technologies in retail close the work.

1 The name NAO comes from the NAOqi Framework, which is the operating system used to program the robot.



2. Literature review

2.1. Robots' capabilities at the back-end of a service setting

Robotic innovation is changing the structure of customer interactions within shops. Some applications are already flooding the industry, making possible advances that were considered science fiction until recently. At the Lowe's Innovation Lab (http://www.lowesinnovationlabs.com), for example, when a customer presents an item to buy, a robot scans it and then escorts the buyer to the exact place in the store where the item is displayed. New shopping experiences are offered at Hointer (http://www.hointer.com/), where shopping becomes digital. There you will find the perfect gift, joining the magic of shopping with technological innovation. The pants are even brought into the dressing room by a robot, which has chosen the right size for you. In the coming years, when customers enter the store, they will be greeted by robotic salespersons, which might have the same skills and knowledge of the location of items in the store as conventional assistants. Above all, however, the robots will be able to interpret gestures and emotions, talk, and recommend products available in the store. From the above, it is clear that the impact of robots on both the front-end and the back-end operations will be very significant. If in the back-end robots can replace staff, reducing costs and performing typical warehouse functions superbly, it is in the front-end that they can be really useful, taking care of customers. Robots will be able to assist the customer, carrying the shopping cart, cruising the aisles, looking for the right products. This task is very important, especially considering the aging of the population and their need to find the products that satisfy their needs. Robots could also provide a self-check-out while the customer fills the cart or when items are removed, or offer the right advice for finding specific products for a dietary regime. These collaborative robots are able to be helpful when necessary, but also to entertain as companions.

2.2. Robots' capabilities at the front-end of a service setting

The idea that is gaining momentum is to know customers' needs in advance, getting information on them and their behavior. Some authors have also addressed this problem through technological apparatuses. For example, one of these applications gets information from sensors embedded inside floor tiles/carpet to track the customer's movement in the store (Elrod & Shrader, 2005), and from a system which can communicate information to physical autonomous robots (Heckel & Schwartz, 2008). This organization creates a network of distributed sensors, which activates when the customer enters the store. Other robotic technologies have been applied in a virtual shop environment, where robots illustrate the quality of specific items to customers, getting information on them from a distributed sensor network and discerning their behavior. Results showed that by interacting with the robots, users stayed much longer in front of the shelves, were positively influenced to carry out purchasing behaviors, and visited the shelves of recommended items (Kamei et al., 2012). Another relevant problem is updating the items on the shelves, in order to have detailed product-level maps of the store layout (Mankodiya et al., 2013), to know exactly their position and to get a digital inventory of all items. To address this problem, a mobile inventory robot that navigates the store, identifies all items and assigns them a bar code was devised by Zimmerman (2010). The robot also creates an inventory map, acquiring images of the shelves and decoding product barcodes from the shelf images, thus creating a complete knowledge of all items in the store. Along this line of research, to save workforce costs and to give


information on Out-of-Stock (OOS) situations, thus improving customer satisfaction and revenue, some authors created a setting based on a 3D/2D virtual environment. A mobile robot endowed with on-board multimodal sensors sends information into this environment, mirroring its current position in the physical world, monitoring the furnishing of the shelves, automatically collecting data and surveying the lack of items in a retail store (Kumar et al., 2014; Lin et al., 2016).

2.3. Customers' acceptance of service robots

The problem of consumers' acceptance of service robots is part of the theoretical framework devised by Parasuraman (2000), who, over a decade ago, designed the "Technology Readiness Index (TRI), a 36-item scale to measure people's propensity to embrace and use cutting-edge technologies". The theoretical framework of customers' acceptance of robotic systems can be traced to extensive qualitative research on people's reactions to technology carried out by Mick and Fournier (1998). These authors identified eight technological paradoxes (Engaging/Disengaging, Assimilation/Isolation, Fulfils/Creates Needs, Efficiency/Inefficiency, Competence/Incompetence, New/Obsolete, Freedom/Enslavement, Control/Chaos), linked to both the positive and negative customer feelings that technology unleashes. Based on this research, and on other experiments conducted with several technologies, Parasuraman (2000) argued that the prevalence of positive and/or negative feelings about technology varies from person to person, and that changes in behavior and acceptance are related to people's propensity to embrace and utilize new technology or not. Other studies (e.g. Davis, Bagozzi, & Warshaw, 1989) also identified the specific consumer beliefs and motivations that may increase (for example, perceived ease of use, entertainment) or limit (for example, perceived risk) the acceptance of new technologies. However, at the time of these studies, technology was still in its infancy. Therefore, a successive scale, the TRI 2.0, with a more compact number of items (16), demonstrated its validity as a tool for segmenting customers' preferences and technology acceptance (Parasuraman & Colby, 2015). One of the key technologies that will have major implications for service providers, customers, and employees in the future is connected to robotic systems. The quality of technologically mediated relationships has also been considered, questioning the technological changes in retailer-customer relationships (Kamei et al., 2010) due to the use of mobile robots in stores (Kamei et al., 2012). In an online survey (Keeling, Keeling, & McGoldrick, 2013), customers judged how they perceive a range of technologically mediated and face-to-face retail relationships. The results showed that human-robot interaction is seen as less friendly and supportive, but more task-oriented, than the human-to-human equivalent. The acceptance of a robotic assistant is related to human-like appearance, aesthetics and behavior, and emotional body language (Minato, Shimada, Ishiguro, & Itakura, 2004; McColl & Nejat, 2014), more than to the quality or perceived usefulness of the interaction. The lack of facial expressions inhibits the communication of emotions, thus giving the human-robot interaction a flavor of uncanniness (Tinwell, Grimshaw, Nabi, & Williams, 2011). This has led researchers to design the most appropriate social behavior structures for human-robot interaction, so that it is acceptable from a social point of view, even in settings such as stores (Heger, Kampling, & Niehaves, 2016).

3. Robots that learn from consumers

Technological devices in the shopping environment modify traditional activities in the retailing sector, as the customer can




interact with robots and smart objects, which in turn can possess their own data and experience and can actively cooperate with each other and with the customer. An interesting new perspective is emerging in the mobile retailing sector (Pantano & Priporas, 2016), in connection with the user's purchasing activity. Mobile devices and robotics are replacing traditional ways of retailing and advertising, introducing robots capable of social interaction. The study of human-robot interaction (HRI) is becoming of fundamental importance (Baxter & Trafton, 2016; Dautenhahn, 2007a,b). In the traditional perspective, robots served to help people carry out dangerous or routine tasks. However, if the shift in the design of shop assistant robots is to carry out the purchase behavior with them, in a social interaction (Adams, Breazeal, Brooks, & Scassellati, 2000; Brooks, 1999), creating a social environment in which robots can learn could be a good choice. To endow a robot with such high-level learning processes, different approaches have been used, to facilitate ease of use and the practical application of these systems in real life situations (Breazeal & Scassellati, 2002). The shopping assistant robot needs to act autonomously and safely in natural workspaces with people. This means that, instead of having its behavior preprogrammed, the robot is capable of evolving its behavior and interacting in a human-like fashion, under a variety of environmental conditions and for a wide range of tasks. In particular, we focused our design on two main skills the robot has to learn: the first is the ability to acquire social input communicated by relevant cues that humans provide about their emotional state. These data are fundamental for understanding the dynamics of any given shopping interaction. The second is the skill to express in turn its own emotional states, so that it can affect the dynamics of social interaction. A great variety of approaches and theories have been developed in cognitive science and robotics that could help further understand the underlying mechanisms of consumer-robot social interactions and enhance robots' capabilities to fit consumers' needs. The first social robots, built by Walter (Holland, 1997), had lamps fixed to the robot's front and a response behavior toward the light of the lamps. In this way, the two robots could interact in a social manner, even without communication or mutual recognition. The "collective" or "swarm" behavior embodied in simulated or physical ant-like robots (Beckers, Holland, & Deneubourg, 1996) used self-organization principles and mimicked the behavior of social insect societies. However, the great revolution came with the definition given by Fong et al. (2003): "Social robots are embodied agents that are part of a heterogeneous group: a society of robots or humans. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other". According to Fong and collaborators, a robot has human-like social skills when it interacts by expressing emotions; communicates using verbal and non-verbal social cues such as dialogue, gaze, gestures or other expressive items; has a definite personality; and has the capability to learn or improve these social abilities. Starting from these works, the perspectives used by researchers in designing the set of skills that belong to social learning have two main features in common: (i) the focus is on high-level forms of social learning such as imitation (Schaal, 1999; Dautenhahn, 2002); (ii) social learning is designed as cognitive equipment that exists in the brain of a robot, together with many other mechanisms, such as perception or language skills (Chella, Cossentino, Sabatucci, & Seidita, 2006). Therefore, social learning has been implemented with a set of

adaptive algorithms that allow a robot to learn socially from other robots, or with a model robot containing the full repertoire of behavior the learning robot has to acquire by imitation. Other aspects involved in acquiring social skills (Dautenhahn, 2001) are related to joint attention and simulation theory (Demeris & Khadhouri, 2006), intentionality and the correspondence problem (Breazeal, 2009; Nehaniv & Dautenhahn, 2007), the relationship between imitator robots and a model of robot behavior to be imitated (Otero, Schweigmann, & Solari, 2008), and the imitation of agents with different personalities (Alissandrakis, Nehaniv, & Dautenhahn, 2007). However, it is still not clear how social learning techniques can be acquired. The design and implementation of robots capable of interacting with the shopping social environment and exploiting the customer's verbal and nonverbal social cues, realizing social learning, is still a challenging endeavor. To develop such a system, we planned to use machine-learning systems.

4. Machine learning systems

Thanks to the production of Big Data and the huge computational ability of modern technology, a new generation of artificial intelligence systems has emerged that learn from the environment and from humans, with the aim of adapting to different and changing environmental situations. These learning machines are one of the most compelling contemporary research sectors (Kumaran, Hassabis, & McClelland, 2016). A wide range of integrated machine learning capabilities and related algorithms can be used, from highly automated functions based on specific methods and diagnostics (Keim et al., 2008; Marsland, 2015) to chaotic methods (Abdechiri, Faez, Amindavar, & Bilotta, 2016). These functions work on a variety of data types, including numerical, textual, sound, image, signal, and raw data. Each data type can be a single feature, a list of features, or an association of features. When one type of data is a list of features, all data must have the same dimensions. To analyze the complexity of the customer-robot interaction in a shopping environment, visual and auditory data coming from the non-verbal (emotions expressed by the face, gestures expressed by body movements) and verbal (spoken language, sounds) behavior of the customer need to be collected. Usually, cluster analysis can provide a better understanding of these data, collected in the environment by video-recording cameras. This technique, developed about 30 years ago, allows finding groups in scattered data in a systematic way, thanks to the use of computers. Currently, cluster analysis is used in many fields of application, from Artificial Intelligence to Pattern Recognition (Kooij, Englebienne, & Gavrila, 2016), from Ecology to Economy (Murray, 2016), from Geoscience to Marketing (Tkaczynski, 2017), Medical Research, Politics, Psychometry, etc.

4.1. Formal aspects of cluster analysis and machine learning

Cluster analysis is also called numerical taxonomy and automatic data classification. According to Kaufman and Rousseeuw (1990), "cluster analysis is an unsupervised learning technique used for classification of data. Data elements are partitioned into groups called clusters that represent proximate collections of data elements based on a distance or dissimilarity function. Identical element pairs have zero distance or dissimilarity, and all others have positive distance or dissimilarity." However, cluster analysis is an unsupervised learning system, so its results can be good but not sufficient. To obtain clear evidence of the robot's performance, we decided to develop a support vector machine (SVM), used for classification and regression analysis. Its strength is the training the system undergoes to improve its performance. The developed SVM algorithm creates a model that allocates new examples into one category or another, thus realizing a probabilistic binary linear classification, given a partial set of data on customers' behavior in a shopping environment, subdivided into four main categories. The data concern the user's emotions, gestural behavior, spoken language, and the spatial organization of the items in the store and their availability. This system represents the examples as points in space, in four separate categories, divided by a gap that is as wide as possible. Given new examples, the system will then map which side of the gap they fall on, and predict to which category each example belongs. Furthermore, SVMs can efficiently perform a non-linear classification using the kernel trick (Khan & Jain, 2016), implicitly mapping their inputs into high-dimensional feature spaces. When data are not labelled, supervised learning is not possible, and an unsupervised learning approach is required, attempting to find natural clusterings of the data into groups, and then mapping new data to these groups. The clustering algorithm that provides an improvement to support vector machines is called support vector clustering (Saltos & Weber, 2016). Industrial applications use the system either when data are not labelled or when only some data are labelled, as a pre-processing step for a classification pass.
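To make this scheme concrete, the following is a minimal sketch of such a four-category classifier, written in Python with scikit-learn as an illustrative stand-in for the Mathematica modules used in the paper; the category names, feature sizes and data are placeholder assumptions.

```python
# Illustrative sketch only: a four-category SVM in scikit-learn, standing in
# for the Mathematica modules described in the text. Categories and features
# below are hypothetical placeholders, not the paper's data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CATEGORIES = ["emotion", "gesture", "speech", "item_location"]  # assumed labels

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))              # 200 observations, 16 features each
y = rng.integers(0, len(CATEGORIES), 200)   # stand-in category labels

# The RBF kernel is the "kernel trick": inputs are implicitly mapped into a
# high-dimensional feature space where a maximally wide separator is sought.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X[:150], y[:150])

# Predict the category of unseen examples and inspect class probabilities.
print(clf.predict(X[150:155]))
print(clf.predict_proba(X[150:155]).round(2))
```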



This research sector is oriented along three main lines:

- Task-Oriented Studies, dealing with the development and analysis of learning systems to improve the computational performance of autonomous systems.
- Cognitive Simulation Studies, related to the simulation of human learning processes.
- Theoretical Analysis, the theoretical exploration of the space of possible learning methods and algorithms, independently of the application domain in which the system will be used (Michalski, 1983).

To implement the cognitive architecture of the robotic social assistant, we based our design on a mixed machine-learning-model approach, to provide social learning abilities to a humanoid robot interacting with consumers in a shop, or with other robots. By using some of the cited methods, basically developed in the Mathematica language (Wolfram, 2016), this paper aims to design and test a coordinated machine learning architecture for investigating the social transferability of complex sensory-motor abilities, learning from humans (Billard, Calinon, & Dillmann, 2016), to improve the customer-robot interaction.

5. The NAO humanoid robot

Research and technological progress have shaped humanoid robots that are particularly skilled in motion, manipulation, locomotion stability and other basic tasks (Fitzpatrick et al., 2016). Many improvements have been made in advanced materials and in mechanical, electronic and computer technologies (Cahn & Lifshitz, 2016). Nowadays, however, research on humanoid robots focuses on behavior, emotions and interaction with humans (Breazeal, 2003). In this regard, the design of humanoid robots requires the integration and coordination of diverse related areas such as learning theory, genetic algorithms (Bilotta, Cutrí, & Pantano, 2006) or Cellular Neural Networks (Bilotta, Pantano, & Vena, 2016), Control Theory, Artificial Intelligence, Mechatronics and even Biomechanics and Computational Neuroscience (Breazeal, 2003; Eyssel, 2017). Developed by Aldebaran Robotics (2012), with its 57 cm height and 4.5 kg weight, NAO is an autonomous humanoid robot (Fig. 1), fully articulated, easy to program and low cost. This robot is widely used as a social companion for students, as a learning tool in the classroom, and for the rehabilitation of autistic children


Fig. 1. Some characteristics of the NAO mechanical, sensorial and software architecture.

(Sharma, Khosla, & Rao, 2016; Huijnen, Lexis, Jansens, & Witte, 2016). Moreover, there are already successful applications of NAO humanoid robots as companions for elderly people (Görer, Salah, & Akın, 2016). The NAO android can perform several complex actions such as walking, kicking a ball, standing up, and picking up objects. The mechanical apparatus has inertial measurement, accelerometer, gyrometer and ultrasonic sensors, providing stability and positioning in space. It is equipped with two 640 × 480 VGA cameras, microphones and loudspeakers, and infrared, force, contact and touch sensors. The NAO H25 (V4) robot carries a fully capable on-board computer and a powerful memory system, running an Embedded Linux distribution. A battery provides about 60 min of continuous operation; the robot communicates with remote computers via a wireless or a wired Ethernet link. The NAO H25 robot has 25 Degrees of Freedom (DoF): 2 in the head, 5 in each arm, 5 in each leg, 1 in each hand and 1 in the pelvis (the two joints of the pelvis are coupled together in one servo mechanism, so they cannot move independently). Controllers and encoders monitor the position of the joints, while five kinematic chains link all joints. Attached to the NAO head, two frontal cameras, the top one focusing on the front and the bottom one on the feet, provide 640 × 480 resolution vision at 30 frames per second (see Fig. 1). A powerful programming environment, easy to use even by novice users, composed of the software Choregraphe (to create complex applications and fine control of motions), NAOqi (to program the execution of the system motion), and Monitor (to receive feedback from the robot and verify joint or sensor values), allows users to easily program and implement NAO behavior. NAOqi includes elements like parallel processing, resource management, synchronization, and event processing, generally required for robotic systems. Furthermore, NAO is an autonomous robot, designed to function remotely, by means of a wireless Internet platform, and without the intervention of the human user. The NAO robot connects to Internet platforms via the mobile communication network or WiFi, thus allowing teleoperation, which is essential especially when the robot is operating in a changing scenario, like the one proposed in the present paper. Furthermore, the NAO robot has a Face Recognition module that provides the possibility not only to detect but also to recognize people. The module analyses the basic features of a face and provides information about the position and shape of the face. It also generates 31 landmarks for a face representation, including the contour of the mouth, the nose position, the shape of each eyebrow and the eye contours. A learning stage is necessary to create the database. The database is stored in a specific directory on the robot, or on a cloud server.2

2 For the technical aspects of the NAO Robot, please see the Aldebaran Robotics "Nao software documentation", 2012. [Online]. Retrieved at http://doc.aldebaran.com/1-14/index.html.
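As an indication of how such modules are scripted, the sketch below uses the NAOqi Python SDK (Python 2) to subscribe to the face detection module and poll the result from the robot's shared memory; the robot address, subscriber name and polling interval are placeholder assumptions.

```python
# Minimal NAOqi face-detection sketch. "nao.local", the subscriber name and
# the polling loop are placeholder assumptions for illustration.
import time
from naoqi import ALProxy

NAO_IP, NAO_PORT = "nao.local", 9559

face = ALProxy("ALFaceDetection", NAO_IP, NAO_PORT)
memory = ALProxy("ALMemory", NAO_IP, NAO_PORT)

# Subscribing makes the module start writing results into ALMemory.
face.subscribe("ShopAssistant")
try:
    for _ in range(20):                      # poll for about ten seconds
        data = memory.getData("FaceDetected")
        if data:                             # non-empty => at least one face
            # data = [timestamp, [faceInfo0, ..., recognitionInfo]]
            print("Faces detected: %d" % (len(data[1]) - 1))
        time.sleep(0.5)
finally:
    face.unsubscribe("ShopAssistant")
```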




5.1. The cognitive architecture implemented as a machine learning coordinated system

To endow the robot with adaptive behavior, ensuring the efficacy of human-like performance in social communication, we designed and implemented a modular cognitive architecture based on a cluster of machine learning algorithms (Kotsiantis, 2007). In this way, it is possible to have multiple processing layers for deep learning (LeCun, Bengio, & Hinton, 2015) over the multimodal data in the NAO's system, of the kind usually handled during human dyadic social interaction. These combined methods discover patterns in large data sets, extracting relevant information and thus allowing internal parameter changes that adapt the robot's behavior to varying environmental circumstances and/or to the customer's behavior. The robot's tasks in the interaction concern four main behavioral functions: customer identification; recognition and exhibition of the customer's emotions and main gestures; speech sentiment analysis (Medhat, Hassan, & Korashy, 2014); and item localization in the shop, with advertising and offerings. These are linked to the cognitive architecture organization reported in Fig. 2.

1. Customer identification/recognition (if her information is stored in the shop's database) is a very important task from the HRI point of view. The robot is capable of identifying the user exactly, and of collecting a set of social and behavioral characteristics, analyzing data from social media, such as tastes,

preferences, favorite books and videos, circles of friends, hobbies, major recreational and working activities, family composition, and social status. Hence, this quest for information gives the robot the ability to communicate with the user as if it knew her, and somehow to be able to steer the consumer toward certain products rather than others.

2. Facial emotion perception and recognition, together with nonverbal behavior and gestures in social interaction, have played an important role in the computational intelligence of humanoid robots since the first studies in cognitive robotics (Breazeal, 1998; Breazeal & Brooks, 2005). Facial expression recognition does not necessarily allow humanoid robots to perceive emotions. However, it can be helpful for understanding the opinions and attitudes of others (Batty & Taylor, 2003), thus constituting a powerful tool that would greatly enhance HRI (Dautenhahn, 2007a,b). The robot might base future interactions on the knowledge of what is going on in the customer's mind, mapping facial expressions to a set of meaningful behaviors, as responses to the affective dimensions the customer is exhibiting. The Facial Action Coding System (FACS) (Ekman & Friesen, 1977), also used in virtual scenarios for rendering emotions in virtual agents (Bertacchini et al., 2007, 2013; Tavernise & Bertacchini, 2016), is the most common method used for recognizing emotions. FACS analyses and codes facial muscular activity as Action Units (AUs) (Cohn, Ambadar, & Ekman, 2007); combinations of AUs discriminate between emotions. In recent years, however, other methods of shape analysis have emerged (Zhang, Jiang, Farid, & Hossain, 2013).

Fig. 2. Cognitive architecture of the NAO's Robot. Sets of coordinated machine learning modules have been implemented, in order to allow the cognitive process management. A general system, which in turn uses a machine learning system, controls the flow of the process. This system synthesizes information, extracting the awareness of the robotic system, linked to the attentional process. Adapted to the shopping assistant robot from Adams et al., 2000.



In these methods, facial features are described by geometric shapes extracted from a set of points that represent parts or regions of the face (eyes, mouth, etc.). All these methods are generally combined with a Support Vector Classifier (SVC) or Linear Discriminant Analysis (LDA) for classification tasks (Hsieh, Wang, & Hsu, 2006). The six basic facial expressions of emotion are anger, happiness, fear, surprise, disgust and sadness. Unfortunately, the NAO robot does not provide an API module to perform this task. The most relevant information it provides is the 31 landmark points of the face, which can be sufficient to carry out the task successfully. For a better interaction, the customer also has to perceive emotions in the NAO robot. NAO does not have the ability to display facial expressions as humans do, because of its mechanical face. However, emotions can be expressed using arm and hand movements and by conveying emotional cues and sounds (Miskam et al., 2014). Through altered body posture, affective postures and different eye colors, children are able to recognize emotional states associated with NAO's behavior (Cohen, Looije, & Neerincx, 2011). Body language movements are preprogrammed using Choregraphe, and then combined with different eye LED colors and a sound for each emotion. The patterns used mimic human body language in real life situations. The robot expresses feelings using body language. The emotions portrayed using NAO's body postures are sadness, happiness and surprise (Fig. 3). Happiness is represented by clapping, playing a baby-laugh sound and turning the eyes green. For sadness, the robot mimics a baby crying sound and turns its eyes red. Finally, to express surprise, the robot moves its head and chest backward, raises its arms, opens its hands, plays a "WOW" sound and turns its eyes yellow. The developed machine learning system is capable of detecting the customer's emotions, and to these outputs the robot can adapt a set of appropriate emotional responses. First, the robot learned to recognize the six main emotions on a relevant sample of human faces, in different real life situations, given as a training set for the machine learning system. Then, a coordinated module of the same system created emergent associations with the possible responses the robot employs during the customer-robot interaction.

Fig. 3. NAO's expression of emotions by using gestures.
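The following is a minimal sketch of how the "happiness" display described above could be scripted with the NAOqi Python SDK rather than inside Choregraphe; the exact posture, LED color value and spoken cue are assumptions, since the paper's actual patterns were preprogrammed in Choregraphe.

```python
# Illustrative NAOqi sketch of the "happiness" display (green eyes plus a
# cheerful sound cue). Posture, color and phrase are assumptions; the robot
# address is a placeholder.
from naoqi import ALProxy

NAO_IP, NAO_PORT = "nao.local", 9559

leds = ALProxy("ALLeds", NAO_IP, NAO_PORT)
tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)
posture = ALProxy("ALRobotPosture", NAO_IP, NAO_PORT)

def show_happiness():
    posture.goToPosture("Stand", 0.6)           # upright, open posture
    leds.fadeRGB("FaceLeds", 0x0000FF00, 0.5)   # eyes to green (0x00RRGGBB)
    tts.say("Ha ha, great choice!")             # stand-in for the laugh sample

show_happiness()
```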


5.2. Sentiment analysis of speech

Sentiment analysis is of great help to shop owners in understanding the level of customer satisfaction with the products of their shop. In addition, with the possibility of getting accurate information from consumers' reviews, they can develop new appealing characteristics for their products. The computational approach to sentiment analysis is based on two methods: 1) machine learning methods and 2) lexicon-based methods. A crucial aspect of the analysis is the presence of words with high "impact polarity" (i.e., expressing totally positive or negative emotions). This analysis can be made in a multimodal way, linking texts, video, images and behaviors collected by the robot while the customer-robot interaction takes place (Pérez-Rosas et al., 2013). The NAO robot can collect real-time dialogues and use sentiment analysis (De Lucia Castillo, Brito, & Santos, 2016) to improve its knowledge of the customer's mood. This machine-learning module ascertains how the customer feels and expresses emotions within the communicative contexts of the shopping interaction with the robotic assistant. Two methods have been used: an a priori method and an emergent one. For the first, we used a list of nouns, verbs, adjectives, adverbs and sentences for the six specific emotions, manually segmented from samples of assistant-customer interactions in physical shops. The terms referred to 'anger', 'love', 'sadness', 'joy', 'fear' and 'surprise', with their related semantic fields. On this sample, the machine learning system was trained. The emergent module operated without previous knowledge, working on the same sample with only three main categories: positive, neutral and negative. With this machine-learning module, we have been able to correlate behavior patterns and emotions with speech.
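As an illustration of the lexicon-based, three-category "emergent" method, here is a toy Python polarity scorer; the word lists are invented placeholders, not the manually segmented lexicon described above.

```python
# Toy lexicon-based polarity scorer (positive/neutral/negative), illustrating
# the three-category "emergent" method. Word lists are placeholders.
POSITIVE = {"love", "great", "nice", "perfect", "happy", "good"}
NEGATIVE = {"hate", "bad", "ugly", "awful", "angry", "expensive"}

def polarity(utterance):
    words = utterance.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this jacket, the fit is perfect"))    # positive
print(polarity("These shoes look awful and too expensive"))  # negative
```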
5.3. Items localization, advertisement, offerings

This part of the cognitive architecture of the robot connects to the shopping center space and to the retail items. As mentioned before, NAO can be connected to the shop's digital store, access the digital information the shop has on its customers, have a map of all the shops (if in a mall), and have detailed information on the exact position of each item on the shelves of the store. This last information amplifies the potential of interconnections if each item has a specific sensor connected to a distributed information network. In this situation, the robot is able to compare the prices of each item and give the customer the best advice. Advertisement and offerings are specific shopping-assistance functions that the robot carries out by using the temporal organization of the customer's behavior. If the visual investigation of the shelves goes beyond a certain amount of time, the robot promotes other items, thus motivating the customer with new information and eventually offering discounts and coupons. In this interaction, dialogues are also important for the decisions of both inter-actors. The idea is to embed these modules into the Aldebaran Choregraphe environment. By modifying a box (or creating a new one), it is possible to add customized scripts directly into Choregraphe. In this way, the NAO CPU processes the flow of information to carry out the set of developed functions. All robot variables are accessible remotely through simple read/write operations. The main advantages of this method are the possibility of using a host with better capacities than the NAO CPU, in order to process large quantities of data, and the use of the simulated robot, which allows tests and changes in a safe and fast way, speeding up the development of the cognitive architecture of the system. The process begins with the command that activates the interaction sequence. The flow of cognitive processes is as follows: the robot waits near the entrance door of the shop; when a customer enters, the interaction starts. The attentional module, in coordination with the low-level perceptual system, detects visual and auditory information about the customer. The machine learning modules are then loaded and executed in an adaptive way, according to the customer's behavior. The High-Level Perception System, in coordination with the Behavioral System and the Motor System, activates the machine learning modules related to Face Detection, Gender Classification, Customer Identification, Emotion Recognition and Gesture Recognition, together with the full range of motor responses, in order to achieve the tasks in a social-like interactional style. All these modules are synchronized with the user's purchasing behavior. A model of the Customer-Robot interaction is reported in Fig. 4.
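The flow just described can be summarized as a simple perception-action loop. The Python sketch below is purely schematic: every function is a hypothetical stub standing in for the corresponding machine-learning module, not part of the NAOqi API.

```python
# Schematic rendering of the interaction flow described above. All functions
# are hypothetical stubs for the ML modules; outputs are random placeholders.
import random

def customer_entered():
    return random.random() < 0.5                 # stub: attentional module

def identify_customer():
    return random.choice(["known", "unknown"])   # face + store-database lookup

def recognize_emotion():
    return random.choice(["happiness", "sadness", "surprise"])

def sentiment_of_speech():
    return random.choice(["positive", "neutral", "negative"])

def respond(identity, emotion, mood):
    # In the real system: adapt posture, eye LEDs, speech and recommendations.
    print("responding to %s customer (emotion=%s, mood=%s)"
          % (identity, emotion, mood))

def interaction_loop(ticks=5):
    for _ in range(ticks):                       # one tick = one perception cycle
        if not customer_entered():
            continue                             # robot stays in its waiting state
        respond(identify_customer(), recognize_emotion(), sentiment_of_speech())

interaction_loop()
```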




Furthermore, in the NAO software environment, events can be easily monitored by adding to the Flow Diagram space (of the Choregraphe environment) the events connected to the real-time sequence happening in the shopping setting with the customer, enabling remote control by a human back-end operator to overcome possible difficulties of the robot.

5.4. The empirical evaluation of the system

The machine learning architecture we have designed for implementing a robotic assistant is able to:

(i) Adapt the robot's behavior to new circumstances that we did not envision, following the changing conditions of the retail interaction.
(ii) Detect patterns in all sorts of data sources coming from the user, from the expression of emotions, to the gestures of nonverbal communication, to speech sentiment analysis.
(iii) Fuse data, thus creating new categories of behavior and new adaptive behavioral responses to unpredictable social conditions.
(iv) Create new behavior based on the patterns extracted from the data, thus becoming accustomed to the social interaction with the customer.
(v) Make decisions based on the performance evaluation of the robot's adaptive behavior.

Currently, we have developed only a number of the modules of NAO's cognitive architecture. In particular, the recognition of the customer, the search for information on social media, representation and navigation in the environment, the recognition of the customer's emotions and the related emotional response of the robot, the search for items in the store, and the sentiment analysis of the customer's speech have been tested in a simulated environment (Fig. 5). The robot operates as an interface to the user. The computational functions are carried out in a cloud-based application, communicating via Choregraphe. In the cloud environment, most of the cognitive modules are machine-learning systems that have been developed with the software Mathematica. The programs can connect to other robotic systems as well. In what follows, we give details on the results of the implemented machine learning functions.

5.5. Customer identification

This module can classify the Customer along many different dimensions. We have chosen three dimensions as particularly meaningful:

- Classification based on the inference the robot performs on the available information already present in the store's digital database on the Customer and on social media.
- Classification based on the representation of the knowledge or skills acquired by the learner in prior interactions with the Customer, which are stored in a repository of information of the robotic application on the cloud.
- Classification in terms of the application domain of the performance system for which knowledge is acquired.

Fig. 4. The Customer-Robot Interaction. The real life scenario has been used to sketch the main components of the interactors' main actions (Dix, 2009). Hidden cognitive processes are reported in red. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)




The robot identifies a customer, takes a picture with its built-in camera, and performs a scan of the face. This operation takes between 3 and 5 s to download the image and scan the face. The module used in Mathematica is FindFace. Assuming that the store has a temporary digital storage of customers' images, the robot checks the database to see if the face is already present with an identification label. In this case, we applied Logistic Regression in developing the machine-learning module. Logistic Regression is a family of classifier models that assigns probabilities with the logistic function, using a linear combination of features. Other names for this model are log-linear model, softmax regression, and maximum-entropy classifier. We developed this recognition function for the Customer's face using the Mathematica 11 software package (Wolfram, 2016); Fig. 6 shows some excerpts from the code. On the sample used, the system correctly identified 11 out of 12 cases, giving each Customer a recognition percentage; the system thus identifies a customer with a high level of probability. Furthermore, the system gave information on the classification function used, the number of classes, the number of features to extract, and the number of training examples. The classification system is extremely fast and, if the robot is able to connect promptly to the set of images stored in the database, it can perform the function in real time, when the Customer enters the store. Given a set of information, for example a set of values for height and weight associated with the categories of male and female, the robot can distinguish the Customer's gender, with the GenderIdentify module, and the language she is speaking, with the LanguageIdentify module (Fig. 6). In this way, the robot gets information that can be useful for the forthcoming interaction, especially related to gender-based and linguistic differences connected to Customers' purchasing behavior. When the Customer is recognized and is present in the store database, the new information is combined with the existing data, enriching the repertoire of the Customer's images. The upgrade favors higher success rates in recognizing the Customer in later interactions. If the robot has not been able to locate the customer in the store database, the system begins a series of different operations. Connecting to the Internet and querying Google Images with the Customer's picture, or using the Customer's name given in the preliminary presentation of the interaction, the robot can connect directly to social media and obtain all sorts of information to guide future interaction. From social media, using a machine-learning module, the system can obtain information about the customer's professional occupation, age, tastes, preferred music, art, colors, hobbies, etc.
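As a hedged illustration of this identification step, the sketch below trains a logistic regression classifier over synthetic face feature vectors in Python with scikit-learn, standing in for the Mathematica-based module; the embeddings and customer labels are placeholders, not the store database.

```python
# Hedged stand-in for the Mathematica identification module: logistic
# regression over face feature vectors. Embeddings and IDs are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
customers = ["cust_01", "cust_02", "cust_03"]

# Pretend each customer has a cluster of 128-d face embeddings on file.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(12, 128))
               for i in range(len(customers))])
y = np.repeat(customers, 12)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new snapshot of customer 2: report the label and its probability,
# mirroring the "recognition percentage" described in the text.
probe = rng.normal(loc=1, scale=0.3, size=(1, 128))
print(clf.predict(probe)[0], clf.predict_proba(probe).max().round(3))
```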

Fig. 5. Simulation of the interaction between the customer and the robot as it could be in a real-life situation.

5.6. Emotion recognition in the customer's facial expressions and speech sentiment analysis

The purchase/sales experience goes through an interaction in which the shopping assistant and the Customer display emotions. We have trained three machine-learning systems: the first two for emotion recognition, the third for sentiment analysis of the Customer's speech. We briefly describe them below.

5.6.1. Classification into two categories (positive/negative emotions)

For the recognition of these two categories of emotions, we trained a machine learning system using a logistic regression model. We used 50 images per category to train the system and another 20 images to verify the results and infer whether the system had learnt to classify. The recognition percentages are, in this particular example, 70% for the positive category and 90% for the negative category. When a product is shown to the user, the process elicits an emotion that can be recorded by the robot and appropriately classified. This coarse information on the customer's immediate reaction to items can be useful for gathering further information on the customer's ultimate preferences, updating an existing profile. The dialogue has been analyzed with a two-valued classification system, to get information on the user's sentiment. For this type of analysis as well, the system returns a classification based on positive/negative speech words when the machine learning method is used, and a cloud of words when the Lexicon method is used (Fig. 7).

5.7. Classification into six basic emotions

Both the emotion and sentiment analysis classifications into two broad categories seem very simple, yet useful as a rough first investigation. To improve the robot's skills in emotion recognition, we created a machine learning system for six different emotions (anger, happiness, fear, surprise, disgust and sadness). We selected a large set of images for training and for the recognition test, some of which are displayed in Fig. 8. The training process is analogous to that of the former case. However, having increased the number of emotional categories, the recognition rate is less robust, giving unsatisfactory results, as can be seen from the following values: {0.25, 0.5, 0.4, 0.5, 0.35, 0.45}. Only in the case of "disgust" and "happy" does the system reach a 50% recognition rate. When, however, a specific type of image is given to the system, the recognition process improves, with the following values: {0.771429, 0.828571, 0.828571, 0.857143, 0.8, 0.828571}. The values of the classifier could certainly improve with a more accurate training set of images for each emotional category, or by using real, known customers' expressions.
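To show how such per-class recognition rates can be computed, here is an illustrative Python evaluation of a six-emotion classifier; the feature vectors are random placeholders, not the image sets used in the paper, so the printed rates will not match the reported values.

```python
# Illustrative evaluation of a six-emotion classifier, mirroring the per-class
# recognition rates quoted in the text. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["anger", "happiness", "fear", "surprise", "disgust", "sadness"]
rng = np.random.default_rng(2)

def fake_images(label_idx, n):          # stand-in for extracted image features
    return rng.normal(loc=label_idx, scale=1.5, size=(n, 64))

X_train = np.vstack([fake_images(i, 50) for i in range(6)])   # 50 per class
y_train = np.repeat(EMOTIONS, 50)
X_test = np.vstack([fake_images(i, 20) for i in range(6)])    # 20 per class
y_test = np.repeat(EMOTIONS, 20)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
pred = clf.predict(X_test)

# Per-class recognition rate, analogous to the {0.25, 0.5, ...} values above.
for emo in EMOTIONS:
    mask = y_test == emo
    print(emo, round(float((pred[mask] == emo).mean()), 2))
```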


Fig. 6. Excerpts from the code for the FindFace, GenderIdentify, and LanguageIdentify machine-learning modules.

6. Discussion

In the complex digital environment we live in, traditional retail shops are struggling to survive, while smart retailing systems are developing at a faster pace than ever before, in the form of electronic commerce (e-commerce) (Saini & Johnson, 2005), mobile commerce (m-commerce), multi-channel systems (Pantano & Viassone, 2015), or omni-channel approaches (Kumar et al., 2016). Traditional shops are becoming bricks-and-clicks retailers, gaining profits through their ability to make management decisions that adapt to the new e-business models and reach millions of consumers at low cost, with many different and mixed convergent dynamics (Wind & Mahajan, 2002).



Fig. 7. Sentiment analysis evaluation results for a short dialogue between a customer and a sales assistant, based on the machine-learning method, and for a conversation on computer-related products, based on the lexicon method. This part of the system could be further automated, as the same analysis can be realized by connecting to the Google Cloud Prediction service and obtaining results on text data (https://cloud.google.com/prediction/docs/sentiment_analysis). It would therefore also be possible to connect the robot to the Google Cloud service in order to obtain more efficient and robust services for the retail sector.
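The lexicon-based route of Fig. 7 can be approximated in a few lines. The positive/negative word lists here are toy stand-ins for a real sentiment lexicon; TextWords, DeleteStopwords and WordCloud are standard Wolfram Language functions.

  dialogue = "the screen is great but the battery is bad and the price is terrible";
  posLex = {"great", "good", "excellent"};   (* illustrative lexicon only *)
  negLex = {"bad", "terrible", "poor"};
  words = TextWords[ToLowerCase[dialogue]];
  score = Count[words, Alternatives @@ posLex] -
          Count[words, Alternatives @@ negLex];   (* here -1, i.e. negative speech *)
  WordCloud[DeleteStopwords[words]]              (* the "cloud of words" of Fig. 7 *)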

Given the complexity of the variables involved in the growth of retail phenomena, scholars in the field are replacing linear retail models with nonlinear ones (Dou & Ghose, 2006). In addition, several physical and managerial changes are occurring in the retail sector, as Internet sales continue to grow (Pantano, 2014) and consumers are increasingly keen to shop online with their Internet devices and to find in physical shops what they have chosen online. From the managerial point of view, adopting a robotic system such as the one described in this paper could have the following beneficial effects on the retail sector:

1. By exploiting the attractiveness of robotic technology, curious customers will be drawn into shops to buy products under the guidance of the robot, thus increasing sales;
2. Customers will benefit from having a faster, intelligent assistant to shop with;
3. The system is low-cost: connected to the cloud, it uses the same technology, infrastructure and IT management practices as the robot platform;
4. Costs decrease sharply, since the intelligence of the robotic system reduces many personnel costs while, in turn, also improving the staff's quality of life;
5. The ease of use of the robotic technology allows social media to be accessed by machine-learning systems, returning various service outputs to the customer and the retailer and enabling a multi-channel strategy covering both marketing and management processes.

These benefits could help the survival of certain locations, allowing a gentle transition from traditional shops to smart, connected retail systems. Finally, the adoption of a robotic platform could help retailers better understand their own organizational system and that of their competitors, adjusting their business model to obtain maximum benefit from the Internet.

Fig. 8. Excerpts from the training sets for the six basic emotions, with the evaluation information produced by logistic regression, results, and a representative example.

7. Conclusions and future work

Moving robots from the back end to the front end of shops to help people in everyday life is one of the major contributions that robotics can offer in the future. The shop environment, populated by humans and robots and connected to social media, is changing the consumer's experience. We have built a robotic system for retailing that is able to learn and constantly update its knowledge about customers, thereby providing markedly improved shopping assistance. With the ability to connect to the store's database, which already contains a digital data repository on users, or simply by logging on to social media, the robot can profile the user, revealing her preferences along with her cultural context. This profiling is the basis for the subsequent dialogue. If the robot has already interacted physically with the client, it already knows her tastes and needs and may therefore propose articles from the store and/or explore with her other items to buy. In this way the robot can not only help, but even predict what the customer wants to buy, or will buy in the future. This mode of interaction with robots definitely changes the retailing scenario: a robot could be a companion to shop with. The emphasis of this system is on emotional behavior, expressed by both interactors. The possibility for the robot to extract emotional and linguistic data on emotions from the customer during the shopping interaction, together with information from social media, allows it to interact in a human-like


fashion, responding with emotional behavior as well. In addition, the robot has a deep influence on the customer's emotions. By reacting to the customer's emotions with its own set of emotions based on motor actions, issuing childlike sounds and vocalizations, a singular connection, mixed with empathy and anticipation, is created in the customer's expectations. The robot collects the customer's personal requirements and preferences and interacts with her in an empathic way, especially through joint attention (Tomasello, 1995), expressing, recognizing and responding to emotional behavior as a whole, and realizing a human-like conversation using gestures and verbal behavior (Lauria, 2002; Sugiyama et al., 2006). Using its built-in camera and auditory sensors, the robot is able, after being trained on samples of already classified images, to focus on the different emotional expressions that ultimately underlie the buyer's purchase. Sentiment analysis of the customer's speech, linked to a machine-learning module that trains the robot to respond correctly to the customer's dialogue, improves its potential to "understand" or "grasp" the user's intentions. This process suggests that the robot might build a high-level comprehension of the task that customers and robots are jointly carrying out, thus allowing the emergence of a robotic "theory of mind" of the customers' intentions (Scassellati, 2002; Wiltshire, Warta, Barber, & Fiore, 2016). In this way, the cooperation of the cognitive architecture of the robot assistant, built from a series of coordinated machine-learning modules, allows the robot to share the customer's high-level goals: to buy the items she has in mind and to be satisfied with her purchase. The NAO robot provides help and offers assistance, recognizing the turn-taking of the interaction, even when the customer is hindered in a complex task. In the empirical experimentation carried out to test the robot's ability to interact in a real-life situation, a large number of people wanted to interact with it, as evidence of these positive feelings. They reported that the system was helpful and that it was fun to interact with a human-like animated robot; some said that the robot also has a sense of humor (Tay, Low, Ko, & Park, 2016). These statements, if confirmed by extensive experimentation, could mean that it will be possible to create a sort of robotic friend with which to go shopping (in both physical and simulated environments), including through remote dialogue, enjoying information and/or guidance, even for distance purchasing. In summary, with the possibilities that new technologies allow, we are witnessing a shift from face-to-face sales to remote sales via e-commerce. Under the pressure of technological innovation, traditional stores are transforming, and robotics might offer new possibilities to combine the experience of the traditional shop with e-commerce capabilities. The scenario in which customer-robot interaction takes place is interconnected, adaptable, dynamic, embedded and intelligent: processors and sensors are integrated into everyday objects, it is possible to communicate directly with the items displayed on the shelves, and many devices and pieces of shop equipment are connected to the net and to social media, communicating with each other and with other people's devices and equipment.
This environment is sensitive to consumer needs and capable of anticipating customers' thoughts and behavior. Future work will focus on developing the system to improve the machine-learning organization and on testing the robotic architecture in a real-life situation, in a controlled experiment. As regards customers' acceptance of the implemented robotic system, the empirical results of this research are insufficient, since we have not yet collected consistent data on the system interacting with a representative sample of customers. Future developments may yield results, which should be assessed for

different types of stores, various types of goods, considering different classes of users, as for example in Goudey and Bonnin (2016).

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References

(2016). Executive Summary World Robotics 2016 - Service Robots. Retrieved at: http://www.diag.uniroma1.it/~deluca/rob1_en/2016_WorldRobotics_ExecSummary_Service.pdf.
Abdechiri, M., Faez, K., Amindavar, H., & Bilotta, E. (2016). The chaotic dynamics of high-dimensional systems. Nonlinear Dynamics, 87, 2597–2610. http://dx.doi.org/10.1007/s11071-016-3213-3 (in press).
Adams, B., Breazeal, C., Brooks, R. A., & Scassellati, B. (2000). Humanoid robots: A new kind of tool. IEEE Intelligent Systems and Their Applications, 15(4), 25–31.
Aldebaran Robotics. (2012). Nao software documentation [Online]. Retrieved at: http://doc.aldebaran.com/1-14/index.html.
Alissandrakis, A., Nehaniv, C., & Dautenhahn, K. (2007). Correspondence mapping induced state and action metrics for robotic imitation. IEEE Transactions on Systems, Man and Cybernetics, 37(2), 299–307.
Balaji, M. S., & Roy, S. K. (2016). Value co-creation with Internet of things technology in the retail industry. Journal of Marketing Management, 1–25.
Batty, M., & Taylor, M. J. (2003). Early processing of the six basic facial emotional expressions. Cognitive Brain Research, 17(3), 613–620.
Baxter, P., & Trafton, J. G. (2016, March). Cognitive architectures for social human-robot interaction. In 2016 11th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 579–580).
Beckers, R., Holland, H. D., & Deneubourg, J. L. (1996). From local to global tasks: Stigmergy and collective robotics. In Artificial life IV (Vol. 181).
Bertacchini, F., Bilotta, E., Gabriele, L., Pantano, P., & Servidio, R. (2010). Using Lego MindStorms in higher education: Cognitive strategies in programming a quadruped robot. In Workshop proceedings of the 18th international conference on computers in education, ICCE (pp. 366–371).
Bertacchini, F., Bilotta, E., Gabriele, L., Vizueta, D. E. O., Pantano, P., Rosa, F., et al. (2013, September). An emotional learning environment for subjects with autism spectrum disorder. In Interactive collaborative learning (ICL), 2013 international conference on (pp. 653–659). IEEE.
Bertacchini, P. A., Bilotta, E., Pantano, P., Battiato, S., Cronin, M., Di Blasi, G., et al. (2007, February). Modelling and animation of theatrical Greek masks in an authoring system. In Eurographics Italian chapter conference (pp. 191–198).
Billard, A. G., Calinon, S., & Dillmann, R. (2016). Learning from humans. In Springer handbook of robotics (pp. 1995–2014). Springer International Publishing.
Bilotta, E., Cutrí, G., & Pantano, P. (2006). Evolving robot's behavior by using CNNs. In International conference on simulation of adaptive behavior (pp. 631–639). Berlin, Heidelberg: Springer.
Bilotta, E., Gabriele, L., Servidio, R., & Tavernise, A. (2009). Edutainment robotics as learning tool. In Transactions on edutainment III (pp. 25–35). Berlin, Heidelberg: Springer.
Bilotta, E., Pantano, P., & Vena, S. (2016). Speeding up cellular neural network processing ability by embodying memristors. IEEE Transactions on Neural Networks and Learning Systems. http://dx.doi.org/10.1109/TNNLS.2015.2511818 (in press).
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3), 167–175.
Breazeal, C. (2009). Role of expressive behavior for robots that learn from people. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1535), 3527–3538.
Breazeal, C., & Brooks, R. (2005). Robot emotion: A functional perspective. In Who needs emotions? The brain meets the robot (pp. 271–310). Oxford University Press.
Breazeal, C., & Scassellati, B. (2002). Robots that imitate humans. Trends in Cognitive Sciences, 6(11), 481–487.
Brooks, R. A. (1999). Cambrian intelligence: The early history of the new AI. Cambridge, MA: The MIT Press.
Cahn, R. W., & Lifshitz, E. M. (Eds.). (2016). Concise encyclopedia of materials characterization. Elsevier.
Che, H., Chen, X., & Chen, Y. (2012). Investigating effects of out-of-stock on consumer stock keeping unit choice. Journal of Marketing Research, 49(4), 502–513.
Chella, A., Cossentino, M., Sabatucci, L., & Seidita, V. (2006). Agile PASSI: An agile process for designing agents. International Journal of Computer Systems Science & Engineering, 133–144.
Cohen, I., Looije, R., & Neerincx, M. A. (2011, March). Child's recognition of emotions in robot's face and body. In Proceedings of the 6th international conference on human-robot interaction (pp. 123–124). ACM.
Cohn, J. F., Ambadar, Z., & Ekman, P. (2007). Observer-based measurement of facial expression with the facial action coding system. In The handbook of emotion elicitation and assessment (pp. 203–221).
Dautenhahn, K. (2001). Socially intelligent agents - the human in the loop. IEEE Transactions on Systems, Man and Cybernetics, A: Systems and Humans, 31, 345–348.


Dautenhahn, K. (2002). In C. L. Nehaniv (Ed.), Imitation in animals and artifacts (pp. 211–228). Cambridge, MA: MIT Press.
Dautenhahn, K. (2007a). Socially intelligent robots: Dimensions of human-robot interaction. Phil. Trans. R. Soc. B, 362, 679–704.
Dautenhahn, K. (2007b). Methodology and themes of human-robot interaction: A growing research field. International Journal of Advanced Robotic Systems, 4(1), 103–108.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.
De Lucia Castillo, F., Brito, J. O., & Santos, C. A. (2016). Animated words clouds to view and extract knowledge from textual information. In Proceedings of the 22nd Brazilian symposium on multimedia and the web (pp. 127–134). ACM.
Demiris, Y., & Khadhouri, B. (2006). Hierarchical attentive multiple models for execution and recognition of actions. Robotics and Autonomous Systems, 54, 361–369.
Dix, A. (2009). Human-computer interaction (pp. 1327–1331). US: Springer.
Dou, W., & Ghose, S. (2006). A dynamic nonlinear model of online retail competition using cusp catastrophe theory. Journal of Business Research, 59(7), 838–848.
Ekman, P., & Friesen, W. V. (1977). Facial action coding system. Oxford University Press.
Elrod, S., & Shrader, E. (2005). Smart floor tiles/carpet for tracking movement in retail, industrial and other environments. U.S. Patent Application No. 11/236,681.
Eyssel, F. (2017). An experimental psychological perspective on social robotics. Robotics and Autonomous Systems, 87, 363–371.
Fellous, J. M., & Arbib, M. A. (2005). Who needs emotions? The brain meets the robot. Oxford University Press.
Fitzpatrick, P., Harada, K., Kemp, C. C., Matsumoto, Y., Yokoi, K., & Yoshida, E. (2016). Humanoids. In Springer handbook of robotics (pp. 1789–1818). Springer International Publishing.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3), 143–166.
Francis, J., Drolia, U., Mankodiya, K., Martins, R., Gandhi, R., & Narasimhan, P. (2013, October). MetaBot: Automated and dynamically schedulable robotic behaviors in retail environments. In Robotic and sensors environments (ROSE), 2013 IEEE international symposium on (pp. 148–153).
Gabriele, L., Tavernise, A., & Bertacchini, F. (2012). Active learning in a robotics laboratory with university students. Cutting-Edge Technologies in Higher Education, 6, 315–339.
Görer, B., Salah, A. A., & Akın, H. L. (2016). An autonomous robotic exercise tutor for elderly people. Autonomous Robots, 1–22.
Goudey, A., & Bonnin, G. (2016). Must smart objects look human? Study of the impact of anthropomorphism on the acceptance of companion robots. Recherche et Applications en Marketing (English Edition), 31(2), 2–20.
Grewal, D., Roggeveen, A. L., & Nordfält, J. (2016). The future of retailing. Journal of Retailing. http://dx.doi.org/10.1016/j.jretai.2016.12.008 (in press).
Guo, Y., & Hu, J. (2014). Research on business model innovation of e-commerce era. International Journal of Business and Social Science, 5(8).
Heckel, T., & Schwartz, R. R. (2008). Retail environment. U.S. Patent Application No. 7,357,316.
Heger, O., Kampling, H., & Niehaves, B. (2016). Towards a theory of trust-based acceptance of affective technology. In Twenty-fourth European conference on information systems (ECIS), İstanbul, Turkey, 2016.
Holland, O. (1997, July). Grey Walter: The pioneer of real artificial life. In Proceedings of the 5th international workshop on artificial life (pp. 34–44). Cambridge: MIT Press.
Hsieh, P. F., Wang, D. S., & Hsu, C. W. (2006). A linear feature extraction for multiclass classification problems based on class mean and covariance discriminant information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(2), 223–235.
Huijnen, C. A., Lexis, M. A., Jansens, R., & Witte, L. P. (2016). Mapping robots to therapy and educational objectives for children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 46(6), 2100–2114.
IFR Executive Summary World Robotics. (2016). Service robots [Online]. Retrieved at: http://www.ifr.org/service-robots/statistics/.
Kamei, K., Ikeda, T., Shiomi, M., Kidokoro, H., Utsumi, A., Shinozawa, K., … Hagita, N. (2012). Cooperative customer navigation between robots outside and inside a retail shop - an implementation on the ubiquitous market platform. Annals of Telecommunications - Annales des Télécommunications, 67(7–8), 329–340.
Kamei, K., Shinozawa, K., Ikeda, T., Utsumi, A., Miyashita, T., & Hagita, N. (2010, November). Recommendation from robots in a real-world retail shop. In International conference on multimodal interfaces and the workshop on machine learning for multimodal interaction (p. 19). ACM.
Kaufman, L., & Rousseeuw, P. J. (1990). Partitioning around medoids (program PAM). In Finding groups in data: An introduction to cluster analysis (pp. 68–125). Wiley Interscience.
Keeling, K., Keeling, D., & McGoldrick, P. (2013). Retail relationships in a digital age. Journal of Business Research, 66(7), 847–855.
Keim, D., Andrienko, G., Fekete, J. D., Görg, C., Kohlhammer, J., & Melançon, G. (2008). Visual analytics: Definition, process, and challenges. In Information visualization (pp. 154–175). Berlin, Heidelberg: Springer.
Kejriwal, N., Garg, S., & Kumar, S. (2015). Product counting using images with application to robot-based retail stock assessment. In 2015 IEEE international conference on technologies for practical robot applications (TePRA) (pp. 1–6). IEEE.


Khan, J. A., & Jain, N. (2016). A survey on intrusion detection systems and classification techniques. IJSRSET, 2(5), 202–208.
Kooij, J. F., Englebienne, G., & Gavrila, D. M. (2016). Mixture of switching linear dynamics to discover behavior patterns in object tracks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2), 322–334.
Kotsiantis, S. B. (2007). Supervised machine learning: A review of classification techniques. Informatica, 31(2007), 249–268.
Kumar, S. (2016). Robotics-as-a-Service: Transforming the future of retail [Online]. Retrieved at: http://www.tcs.com/resources/white_papers/Pages/Robotics-as-Service.aspx.
Kumar, V., Anand, A., & Song, H. (2016). Future of retailer profitability: An organizing framework. Journal of Retailing. http://dx.doi.org/10.1016/j.jretai.2016.11.003 (in press).
Kumaran, D., Hassabis, D., & McClelland, J. L. (2016). What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in Cognitive Sciences, 20(7), 512–534.
Kumar, S., Sharma, G., Kejriwal, N., Jain, S., Kamra, M., Singh, B., et al. (2014, April). Remote retail monitoring and stock assessment using mobile robots. In 2014 IEEE international conference on technologies for practical robot applications (TePRA) (pp. 1–6). IEEE.
Lauria, S. (2002). Mobile robot programming using natural language. Robotics and Autonomous Systems, 38(3/4), 171–181.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Lehmann, H., Walters, M., Dumitriu, A., May, A., Koay, K., Saez-Pons, J., et al. (2013). Artists as HRI pioneers: A creative approach to developing novel interactions for living with robots. In G. Herrmann, M. Pearson, A. Lenz, P. Bremner, A. Spiers, & U. Leonards (Eds.), Social robotics (Vol. 8239, pp. 402–411). Berlin: Springer International Publishing. Lecture Notes in Computer Science.
Lin, T., Baron, M., Hallier, B., Raiti, M., Olivero, S., Johnson, S., et al. (2016, April). Design of a low-cost, open-source, humanoid robot companion for large retail spaces. In Systems and information engineering design symposium (SIEDS), 2016 (pp. 66–71). IEEE.
Mankodiya, K., Martins, R., Francis, J., Garduno, E., Gandhi, R., & Narasimhan, P. (2013). Interactive shopping experience through immersive store environments. In International conference on design, user experience, and usability (pp. 372–382). Berlin, Heidelberg: Springer.
Marsland, S. (2015). Machine learning: An algorithmic perspective. New York: Chapman & Hall/CRC Press.
McColl, D., & Nejat, G. (2014). Recognizing emotional body language displayed by a human-like social robot. International Journal of Social Robotics, 6(2), 261–280.
Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams Engineering Journal, 5(4), 1093–1113.
Merrick, K. (2017). Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41, 38–55.
Michalski, R. S. (1983). A theory and methodology of inductive learning. In Machine learning (pp. 83–134). Berlin, Heidelberg: Springer.
Mick, D. G., & Fournier, S. (1998). Paradoxes of technology: Consumer cognizance, emotions, and coping strategies. Journal of Consumer Research, 25(2), 123–143.
Minato, T., Shimada, M., Ishiguro, H., & Itakura, S. (2004, May). Development of an android robot for studying human-robot interaction. In International conference on industrial, engineering and other applications of applied intelligent systems (pp. 424–434). Berlin, Heidelberg: Springer.
Min, H., Luo, R., Zhu, J., & Bi, S. (2016). Affordance research in developmental robotics: A survey. IEEE Transactions on Cognitive and Developmental Systems. http://dx.doi.org/10.1109/TCDS.2016.2614992 (in press).
Miskam, M. A., Shamsuddin, S., Samat, M. R. A., Yussof, H., Ainudin, H. A., & Omar, A. R. (2014, November). Humanoid robot NAO as a teaching tool of emotion recognition for children with autism using the Android app. In Micro-nanomechatronics and human science (MHS), 2014 international symposium on (pp. 1–5). IEEE.
Murray, M. J. (2016). Factor analysis, cluster analysis, and nonparametric research methods for heterodox economic analysis. In Handbook of research methods and applications in heterodox economics (pp. 190–209).
Nehaniv, C. L., & Dautenhahn, K. (Eds.). (2007). Imitation and social learning in robots, humans and animals. Cambridge: Cambridge University Press.
Nur, K., Morenza-Cinos, M., Carreras, A., & Pous, R. (2015). Projection of RFID-obtained product information on a retail store's indoor panoramas. IEEE Intelligent Systems, 30(6), 30–37.
Otero, M., Schweigmann, N., & Solari, H. G. (2008). A stochastic spatial dynamical model for Aedes aegypti. Bulletin of Mathematical Biology, 70, 1297–1325.
Pantano, E. (2014). Innovation drivers in retail industry. International Journal of Information Management, 34, 344–350.
Pantano, E., & Priporas, C. V. (2016). The effect of mobile retailing on consumers' purchasing experiences: A dynamic perspective. Computers in Human Behavior, 61, 548–555.
Pantano, E., & Viassone, M. (2015). Engaging consumers on new integrated multichannel retail environments: Challenges for retailers. Journal of Retailing and Consumer Services, 25, 106–114.
Parasuraman, A. (2000). Technology readiness index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(May), 307–320.
Parasuraman, A., & Colby, C. L. (2015). An updated and streamlined technology readiness index: TRI 2.0. Journal of Service Research, 18(1), 59–74.


Pérez-Rosas, V., Mihalcea, R., & Morency, L. P. (2013). Utterance-level multimodal sentiment analysis. In ACL (Vol. 1, pp. 973–982).
Picard, R. W. (1997). Affective computing (Vol. 252). Cambridge: MIT Press.
Pratiba, D. (2013). Incorporating human behavioral patterns in big data, text analytics. IJRCCT, 2(10), 976–978.
Proia, A. A., Simshaw, D., & Hauser, K. (2015). Consumer cloud robotics and the fair information practice principles: Recognizing the challenges and opportunities ahead. Minn. J.L. Sci. & Tech., 16(1), 145.
Saini, A., & Johnson, J. L. (2005). Organizational capabilities in e-commerce: An empirical investigation of e-brokerage service providers. Journal of the Academy of Marketing Science, 33(3), 360–375.
Saltos, R., & Weber, R. (2016). A rough–fuzzy approach for support vector clustering. Information Sciences, 339, 353–368.
Scassellati, B. (2002). Theory of mind for a humanoid robot. Autonomous Robots, 12(1), 13–24.
Schaal, S. (1999). Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3, 233–242.
Sharma, A., Khosla, A., & Rao, Y. (2016). Technological tools and interventions to enhance learning in children with autism. Supporting the Education of Children with Autism Spectrum Disorders, 204.
Sugiyama, O., Kanda, T., Imai, M., Ishiguro, H., Hagita, N., & Anzai, Y. (2006). Humanlike conversation with gestures and verbal cues based on a three-layer attention-drawing model. Connection Science, 18(4), 379–402.
Takayama, L., Ju, W., & Nass, C. (2008). Beyond dirty, dangerous and dull: What everyday people think robots should do. In Proceedings of the 3rd ACM/IEEE international conference on human-robot interaction (pp. 25–32).
Tavernise, A., & Bertacchini, F. (2016). Designing educational paths in virtual worlds for a successful hands-on learning: Cultural scenarios in NetConnect project. In Handbook of research on 3-D virtual environments and hypermedia for ubiquitous learning (pp. 148–167). IGI Global.

Tay, B. T., Low, S. C., Ko, K. H., & Park, T. (2016). Types of humor that robots can play. Computers in Human Behavior, 60, 19–28.
Tinwell, A., Grimshaw, M., Nabi, D. A., & Williams, A. (2011). Facial expression of emotion and perception of the Uncanny Valley in virtual characters. Computers in Human Behavior, 27(2), 741–749.
Tkaczynski, A. (2017). Segmentation using two-step cluster analysis. In Segmentation in social marketing (pp. 109–125). Singapore: Springer.
Tomasello, M. (1995). Joint attention as social cognition. In C. Moore, & P. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 103–130). Hillsdale, NJ: Erlbaum.
Ultes, S., Dikme, H., & Minker, W. (2016). Dialogue management for user-centered adaptive dialogue. In Situated dialog in speech-based human-computer interaction (pp. 51–61). Springer International Publishing.
Vojtovič, S., Navickas, V., & Gruzauskas, V. (2016). Strategy of sustainable competitiveness: Methodology of real time customers' segmentation for retail shops. Journal of Security & Sustainability Issues, 5(4).
Wiltshire, T. J., Warta, S. F., Barber, D., & Fiore, S. M. (2016). Enabling robotic social intelligence by engineering human social-cognitive mechanisms. Cognitive Systems Research. http://dx.doi.org/10.1016/j.cogsys.2016.09.005 (in press).
Wind, Y., & Mahajan, V. (2002). Convergence marketing. Journal of Interactive Marketing, 16(2), 64–79.
Wolfram, S. (2016). Wolfram language & system. Documentation Center [Online]. Retrieved at: http://reference.wolfram.com/language/.
Zhang, L., Jiang, M., Farid, D., & Hossain, M. A. (2013). Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot. Expert Systems with Applications, 40(13), 5160–5168.
Zhang, J., Lyu, Y., Roppel, T., Patton, J., & Senthilkumar, C. P. (2016, March). Mobile robot for retail inventory using RFID. In 2016 IEEE international conference on industrial technology (ICIT) (pp. 101–106). IEEE.
Zimmerman, T. G. (2010). U.S. Patent No. 7,693,757. Washington, DC: U.S. Patent and Trademark Office.
