Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity


PII: S1071-5819(17)30019-8
DOI: http://dx.doi.org/10.1016/j.ijhcs.2017.02.005
Reference: YIJHC2107

To appear in: International Journal of Human-Computer Studies
Received date: 6 November 2016
Revised date: 4 February 2017
Accepted date: 19 February 2017

Cite this article as: SeoYoung Lee and Junho Choi, Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity, International Journal of Human-Computer Studies, http://dx.doi.org/10.1016/j.ijhcs.2017.02.005

This is a PDF file of an unedited manuscript that has been accepted for publication. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Errors discovered during production may affect the content, and all legal disclaimers that apply to the journal pertain.

Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity

Seo Young Lee*, Junho Choi
Graduate School of Information, Yonsei University, Seoul, S. Korea

*Corresponding author. [email protected]

ABSTRACT

This paper investigates how user satisfaction with, and intention to use, an interactive movie recommendation system are determined by communication variables and by the relationship between a conversational agent and its user. Adopting the Computers-Are-Social-Actors (CASA) paradigm and uncertainty reduction theory, it examines the influence of self-disclosure and reciprocity, as key communication variables, on user satisfaction. A two-way ANOVA was conducted to analyze the effects of self-disclosure and reciprocity on user satisfaction with a conversational agent. The interaction effect of self-disclosure and reciprocity on user satisfaction was not significant, but the main effects of both were significant. PLS analysis showed that perceived trust and interactional enjoyment are significant mediators between the communication variables and user satisfaction. In addition, reciprocity is a stronger predictor than self-disclosure of relationship building between an agent and a user, and user satisfaction is an influential determinant of intention to use. These findings have implications from both practical and theoretical perspectives.

Keywords

CASA, Uncertainty reduction theory, Self-disclosure, Reciprocity, Conversational agent, User experience, Speech-based interaction

1. Introduction

Conversational agents (CAs) are becoming ever more popular, with Microsoft's Cortana, Google Now, Apple's Siri, and Amazon's Alexa vying to establish voice recognition as the mainstream mode of human–computer interaction (HCI). According to consumer analytics (Barrett, 2013), "roughly one-fifth of broadband users surveyed use either Siri or Google Now." These digital assistants operate like a personal secretary or concierge, assisting users with many functions such as search queries, making phone calls, scheduling, and reading books. Speech recognition is also an existing feature of many smart TVs. This paper suggests that smart TV agents would be more popular if they could establish an intimate relationship with viewers and make personalized content recommendations to them. Recommendation system technology is a common feature of content service providers such as Netflix and Amazon. However, these market leaders rely purely on a user's viewing or transaction history to make recommendations. This approach is ineffective for initial interactions between users and an agent and cannot take a user's current mood into account. It also takes time to accumulate sufficient personal data to derive relevant recommendations. It would be quicker and more effective if users could reveal their preferences and interests to an agent (Satzger, Endres, & Kiebling, 2006). It is also proposed that recommendations could be

enhanced or made more efficient if the agent is able to build a relationship with the user. A natural user interface capable of making curated recommendations could give smart TV makers an important competitive advantage. Thus, there is a need for more research on conversational agents capable of natural, human-like interaction. In this study, a conversational agent uses self-disclosure and reciprocity to garner information for movie recommendations and to facilitate relationship development in the process. Moon (2000) demonstrated that humans can develop intimacy with computers through self-disclosure and reciprocity. This paper proposes that if the agent gains the user's trust, intimacy, and interactional enjoyment, this will enhance user satisfaction and encourage intention to use. The tendency to treat interactions with agents as inherently social and natural is based on the Computers-Are-Social-Actors (CASA) theory (Reeves & Nass, 1996). To test the hypotheses, the proposed model was validated through an experiment. Accordingly, this paper investigates the following research questions:

1) How does self-disclosure between a user and a conversational agent affect user satisfaction and intention to use?
2) How does reciprocity between a user and a conversational agent affect user satisfaction and intention to use?
3) How do the levels of intimacy, trust, and interactional enjoyment between a user and a conversational agent affect user satisfaction and intention to use?

This study consists of two parts: Study 1 is a two-way analysis of variance (ANOVA) of how the independent variables, self-disclosure and reciprocity, affect user satisfaction and intention to use. Study 2 uses partial least squares (PLS) analysis to investigate the relationships among all variables in the research model. Verification of these relationships was based on testing hypotheses derived from theoretical relevance.

2. Literature review

2.1. CASA

According to the media equation theory first proposed by Reeves and Nass (1996), "interactions between humans and computers are fundamentally social and natural." CASA theory states that humans interacting with computers exhibit the same social behaviors as in human-to-human interaction (HHI) (Reeves & Nass, 1996). This is because humans tend to apply expectations and social rules to computers (Reeves & Nass, 1996) and rely on social categories, e.g., social status, ethnicity, and gender, when interacting with them (Nass & Moon, 2000).

2.2. Uncertainty Reduction Theory

According to Berger and Calabrese's (1975) uncertainty reduction theory, relationships begin with both parties attempting to elicit information about each other. The start of a relationship is fraught with uncertainty. Both parties attempt to elicit self-disclosure, which can vary in depth and breadth, and the high level of uncertainty also induces a high level of reciprocity. This implies that the interpersonal communication variables self-disclosure and reciprocity are essential factors for initiating and developing a relationship. Self-disclosure is the process of revealing thoughts, opinions, emotions, or personal information to others (Pearce & Sharp, 1973). Self-disclosure helps reduce uncertainty between two parties, and the recipient of personal information is expected to reciprocate; if either party does not reciprocate, relationship development is less likely to succeed (Sprecher et al., 2013). Reciprocity is the tendency to repay any benefits, gifts, treatment, or favors that a person may have received from another person (Sprecher et al., 2013). In a human–agent relationship, reciprocity can be understood as the perception of give-and-take in interactions between

human and agent (Weiss & Tscheligi, 2012). It is an important social cue for systems in which either the agent or the human is requesting assistance (Weiss & Tscheligi, 2012). Martelaro, Nneji, Ju, and Hinds (2016) showed that when an intelligent robot makes vulnerable disclosures, i.e., disclosures that reveal a weakness or fallibility, it increases feelings of trust and companionship. It also encouraged respondents to disclose more than when the interaction consisted of non-vulnerable disclosures. They also found that respondents may disclose less if the intelligent robot is less expressive and makes fewer vulnerable disclosures. Accordingly, these two variables were chosen in this paper for interaction with a conversational agent. This raises some questions. Will self-disclosure and reciprocity reduce uncertainty in interactions between a human and a smart TV recommendation agent? Will reduced uncertainty increase intimacy, trust, and enjoyment as a consequence?

2.2.1. Self-disclosure

Self-disclosure is vital for developing and maintaining relationships (Collins & Miller, 1994). According to Collins and Miller (1994), there are three important disclosure–liking effects: (a) making intimate disclosures will make people like you more than engaging only in non-intimate disclosures; (b) people will be encouraged to disclose more if they initially like someone; and (c) disclosing yourself to someone will make you like that person. Humans can also develop a level of trust and, hence, a relationship with a computer through the process of disclosure and reciprocity (Moon, 2001). Thus, in this study, we manipulate self-disclosure to ascertain its effect on developing a relationship with a conversational agent.

2.2.2. Reciprocity


When two people meet, the ability to build rapport depends on both interactants reciprocating equally in the conversation (Collins & Miller, 1994; Sprecher et al., 2013). An interaction in which one party fails to reciprocate is less likely to have a positive outcome (Sprecher et al., 2013). People tend to be more attracted to those to whom they disclose or from whom they receive disclosure (Collins & Miller, 1994). In sum, the norm of reciprocity may be an elemental human behavior (Gouldner, 1960). The pace at which disclosure and reciprocity occur depends on the state of the relationship. Reciprocity creates the illusion that the agent is realistic and assists in developing intimacy and emotional attachment (Mark & Becker, 1999). Thus, in this study, we manipulate reciprocity to examine its effect on relationship development with a conversational agent.

2.3. Variables Mediating Relationship Development

Interpersonal relationships are created through a variety of relationship-building factors such as intimacy, trust, and interactional enjoyment. Humans employ a host of conversational techniques to build trusting relationships (Bickmore & Picard, 2005). Embodied conversational agents (ECAs) are especially adept at building rapport with people (Gratch, Wang, Gerten, Edward, & Duffy, 2007). As described above, this implies a need for intimacy and trust as key variables in relationship development. If the relationship becomes socially interactive, then perceived enjoyment is quite influential in determining user acceptance of technology, or intention to use it again (Shin & Choo, 2013).

2.3.1. Intimacy with Technology

Intimacy develops primarily through self-disclosure (Derlega, Metts, Petronio, & Margulis, 1993; Laurenceau et al., 1998). Reeves and Nass's (1996) pioneering work with CASA explored many attributes of intimacy in the context of human–machine relationships.
Numerous subsequent studies showed that regardless of the technology, intimacy is a vital element in its perceived usability and acceptance (Sung, Guo, Grinter, & Christensen, 2007). Furthermore, Venkatesh (2000) reported intimacy as an important component of his Technology Acceptance Model.

2.3.2. Trust in Technology

Trust is vital for the acceptance of technology (Madhavan & Wiegmann, 2007; Parasuraman & Wickens, 2008). Considerable research has indicated that users need to trust a technology before they are willing to adopt it (Ghazizadeh et al., 2012; Pavlou, 2003; Wang & Benbasat, 2005; Parasuraman & Wickens, 2008). Specifically, trust in technology develops when a device can help users achieve their objectives (Lee & See, 2004). Placing trust in technology may influence a user's trust on a more personal level, similar to interpersonal trust between humans (Muir, 1987). This interpersonal trust is also a characteristic of human–conversational agent relationships. Conversely, a lack of trust in technology discourages users from using it to its full potential.

2.3.3. Interactional Enjoyment

Perceived enjoyment is the impression that a technological device is enjoyable to operate, regardless of whether it provides better functionality (Davis, Bagozzi, & Warshaw, 1992). Perceived enjoyment indirectly affects intention to use for task-oriented systems (Venkatesh et al., 2003). However, perceived enjoyment is pivotal for systems designed for pleasure (Van der Heijden, 2004). According to Shin and Choo (2011), "robots with more sophisticated social skills have greater influence on social presence, which then elevates perceived enjoyment"; this leads to a higher intention to accept and use. If conversation with an agent is human-like, a user enjoys it and is encouraged to use the technology (Heerink et al., 2008). Therefore, smart TV conversational agents should be designed for pleasure as well as utilitarian value. Thus, in this study, interactional enjoyment should directly influence user satisfaction and intention to use.

3. Research model and hypotheses

Based on reviews of prior HCI literature, this study puts forward a research model proposing that self-disclosure and reciprocity can be used to generate intimacy, trust, and interactional enjoyment, which in turn result in user satisfaction and intention to use.

Fig. 1 Conceptual Model

3.1. The Link between Self-disclosure and Relationship Development

Researchers have found that self-disclosure is a crucial factor in evoking feelings of liking (Collins & Miller, 1994). In more advanced relationships, disclosure may facilitate an emotional connection (Sprecher et al., 2013). Hence, self-disclosure increases the likelihood of intimacy, trust, and interactional enjoyment. Therefore, the following hypotheses are proposed.

H1. Participants develop intimacy with a conversational agent if they participate in self-disclosure.

H2. Participants develop trust in a conversational agent if they participate in self-disclosure.

H3. Participants experience interactional enjoyment if they participate in self-disclosure with a conversational agent.

3.2. The Link between Reciprocity and Relationship Development

When an interaction repeatedly involves bilateral self-disclosure and reciprocity, it leads to trust and attraction (Sprecher et al., 2013). If the recipient of disclosure does not reciprocate, an uncomfortable imbalance arises in the relationship (Archer, 1979). Therefore, a link between reciprocity and relationship development is posited through the following hypotheses.

H4. Participants develop greater intimacy with a conversational agent if a high level of reciprocity is exercised during communication.

H5. Participants develop trust in a conversational agent if a high level of reciprocity is exercised during communication.

H6. Participants experience interactional enjoyment if a high level of reciprocity is exercised during communication.

3.3. The Link between Relationship Development and User Satisfaction

As discussed above, intimacy enhances acceptance of technology and its perceived usability (Sung, Guo, Grinter, & Christensen, 2007). A user's level of trust in technology also influences intention to use (Bagheri & Jamieson, 2004; Lee & Moray, 1994). Finally, perceived enjoyment directly affects intention to use (Shin & Choo, 2011).

H7. Perceived intimacy has a positive effect on user satisfaction.

H8. Perceived trust has a positive effect on user satisfaction.

H9. Perceived interactional enjoyment has a positive effect on user satisfaction.

3.4. The Link between User Satisfaction and Intention to Use

User satisfaction derives from the feeling of satisfaction when using a system (Seddon & Kiew, 1994). Seddon and Kiew (1994) showed that perceived usefulness determines user satisfaction, and user satisfaction in turn influences intention to use (DeLone & McLean, 2003; Konradta et al., 2006). Therefore, the following hypothesis is proposed.

H10. User satisfaction has a positive effect on intention to use.

4. Research methodology

4.1. Measurement Development

The experiment was conducted in the context of a conversation between an agent and a user to select a movie. A laboratory experiment was devised to analyze how self-disclosure and reciprocity affect user experience. Moreover, to test the model for the mediating effects of relational attitudes, the three constructs of intimacy, trust, and interactional enjoyment were measured. The measurement items were derived from the literature and modified for the context of interacting with computer agents (see Appendix A). Self-disclosure items comprised questions about the quantity, depth, and frankness of revealing personal information to the agent (Altman & Taylor, 1973; Collins & Miller, 1994; Jourard, 1959; Moon, 2000). The reciprocity items measured the user's impression of the CA's response to the user's self-disclosure (Berg & Derlega, 1987; Collins & Miller, 1994; Taylor & Hinds, 1985). Berschied et al. (1989) provided the basis for the intimacy items, which measured user perception of closeness and familiarity after conversing with the agent. Trust items comprised measures of the reliability, honesty, and trustworthiness of the agent (Wang & Benbasat, 2005; Dinev & Hart, 2006; Morgan & Hunt, 1994; Moorman et al., 1993). Interactional enjoyment items measured how excited and absorbed the user was when conversing with the agent (Koufaris, 2002; Van der Heijden, 2003, 2004). User satisfaction items measured how satisfying the user's experience with the conversational agent was (Chin, Diehl, & Norman, 1988). To gauge acceptance and potential loyalty to the service, we asked participants to rate their intention to use the system again and their inclination to recommend it to others (Davis, Bagozzi, & Warshaw, 1992; Wang & Benbasat, 2005). Items were rated by experimental subjects on a 7-point Likert scale and were validated with a pilot test.

4.2. Experimental Design and Manipulation Check

To gauge the effect of differing degrees of self-disclosure and reciprocity, we devised a 2 × 2 between-subjects factorial design. Crossing the two factors yielded four conditions: high self-disclosure and high reciprocity, high self-disclosure and low reciprocity, low self-disclosure and high reciprocity, and low self-disclosure and low reciprocity. Four conversation scenarios (Appendix D) were designed to achieve the manipulations. Participants were given a choice about their preferences for a movie genre, movie, or actor. The agent responded to their preferences with an appropriate computer screen and matching spoken dialogue, and participants also interacted with the agent using speech. This is a convincing method of simulating human–computer interaction even though no real artificial intelligence is embedded. The 2 × 2 manipulations of self-disclosure and reciprocity were systematically varied as follows. Group A had high self-disclosure and high reciprocity: the user was quite willing to disclose information, and the computer was quite willing to help and be responsive. The agent customized responses to each user and matched the user's emotional state.
Conversely, in group D, the user was not willing to disclose, and the computer responded only with the bare minimum of requested information, without displaying any anthropomorphism or expressing any emotion. In group B, the user tried to reveal themselves, but the agent showed no self-disclosure or emotion and only provided the information requested. In group C, the user did not participate in self-disclosure, but the computer tried to be very responsive with suggestions; the user behaved very passively, like a couch potato, while the agent acted like a secretary. To validate the treatment of the conversation scenarios, a manipulation check for the 2 × 2 design of self-disclosure and reciprocity was conducted with 48 participants. The manipulation was successful: perceived self-disclosure differed significantly between the high self-disclosure treatment (M = 5.16, SD = 0.85) and the low self-disclosure treatment (M = 3.33, SD = 1.22), t = 6.03, p < .001. The high and low reciprocity treatments also differed significantly (t = 6.82, p < .001), with perceived reciprocity higher in the high reciprocity treatment (M = 5.13, SD = 0.97) than in the low reciprocity treatment (M = 3.31, SD = 0.87).
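The manipulation check described above amounts to two independent-samples t-tests on the perception ratings. A minimal sketch in Python (the simulated Likert ratings are illustrative, generated to mirror the reported self-disclosure means and SDs; this is not the authors' analysis code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 7-point ratings for the two self-disclosure treatments
# (means/SDs chosen to mirror the reported M = 5.16, SD = 0.85 vs M = 3.33, SD = 1.22)
high_sd = rng.normal(5.16, 0.85, size=24).clip(1, 7)
low_sd = rng.normal(3.33, 1.22, size=24).clip(1, 7)

# Welch's t-test (does not assume equal variances)
t, p = stats.ttest_ind(high_sd, low_sd, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

With group means nearly two scale points apart, the test is comfortably significant, matching the reported t = 6.03.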

4.3. Experimental Procedure

Developing or licensing a proprietary artificially intelligent agent to enact the combinations of self-disclosure and reciprocity would have been too costly and difficult. It is more cost-effective to use the Wizard of Oz technique, in which a human in another room plays the part of the conversational agent, negating the need to build an artificial intelligence. Participants do not know whether they are talking to a human or a computer, and the results are quite similar to chatting with a conversational agent (Bradley, Benyon, Mival, & Webb, 2010). In this study, however, the agent responded to the user's choice by selecting an appropriate answer from an extensive menu of pre-recorded dialogue matched with a computer screen. The agent's image was sponsored by the 3D-Guile Company. It had minimal nonverbal cues, being capable only of moving its lips and blinking its eyes, which was sufficient for this study because it focuses only on verbal communication. For the main experiment, 225 participants took part, aged 18 to 58 years. To obtain a broad cross-section of movie watchers, 55% of participants were recruited from outside Yonsei University; 45% were students, and the rest were professionals and homemakers. More than half (51.6%) had prior experience with a mobile voice agent such as Siri or S-Voice, and connected TV users constituted 75% of the sample. Most participants (86.7%) reported that they liked watching movies. All participants were randomly assigned to a treatment with varying levels of self-disclosure and reciprocity; each group had an average of 56 participants. In the lab, participants began by completing a consent form, after which they were instructed to converse with the agent following one of the four conversation scenarios. After interacting with the conversational agent, they were asked to fill out a survey. Each participant took roughly 20 minutes to complete the experiment and survey, and each was given US $10 as compensation upon completion.

5. Results

5.1. STUDY 1: Effects of Self-disclosure and Reciprocity on User Satisfaction

A two-way ANOVA was conducted to analyze the effects of self-disclosure and reciprocity on user satisfaction with a conversational agent. The interaction effect of self-disclosure and reciprocity on user satisfaction was not significant, but the main effects of self-disclosure (F = 10.99, p < .01) and reciprocity (F = 17.47, p < .001) were significant.

The results of Study 1 showed that self-disclosure and reciprocity have a strong impact on perceived satisfaction with a conversational agent. Study 2 was conducted to elaborate on the mechanism of interaction with conversational agents. A summary of the raw data, with averages by condition, construct, and gender, is given in Appendix C.

Table 1. ANOVA test of Self-disclosure and Reciprocity

Source                     Type III Sum of Squares   df    Mean Square   F         Sig.
Corrected Model            20.437                    3     6.81          10.59     .000
Intercept                  5735.78                   1     5735.78       8914.92   .000
Disclosure                 7.07                      1     7.07          10.99     .001
Reciprocity                11.24                     1     11.24         17.47     .000
Disclosure × Reciprocity   .28                       1     .28           .44       .510
Error                      142.19                    221   .64
Total                      6093.68                   225
Corrected Total            162.63                    224

5.2. STUDY 2: Structural Modeling

5.2.1. Validity Check of Measurement Model

Partial least squares (PLS) analysis was used to validate the research model. According to Chin, Marcolin, and Newsted (2003), PLS analysis can simultaneously evaluate a measurement model and a theoretical structural model. To check the reliability and validity of the measurement model, convergent validity and composite reliability tests were conducted. Discriminant validity (also known as divergent validity) tests the degree of disagreement among constructs, whereas convergent validity tests the extent of agreement among related constructs. Convergent validity in this model was estimated from the factor loadings and Cronbach's alpha reliability scores. Factor loadings are considered acceptable if greater than 0.7, and most loadings exceeded this criterion; Cronbach's α scores above 0.7 were likewise considered acceptable.

Table 2. The internal consistency of the measurement items

Construct                 AVE     CR (Composite Reliability)   Cronbach's α
Self-Disclosure           0.600   0.874                        0.753
Reciprocity               0.573   0.903                        0.856
Intimacy                  0.725   0.929                        0.902
Trust                     0.602   0.900                        0.855
Interactional Enjoyment   0.678   0.926                        0.905
User Satisfaction         0.671   0.911                        0.871
Intention to Use          0.709   0.936                        0.918
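The AVE and composite reliability figures in Table 2 are standard functions of the standardized loadings (per Fornell and Larcker's formulas). A sketch using the Intimacy loadings reported in Table 3:

```python
# Standardized loadings for the Intimacy construct (Table 3)
loadings = [0.786, 0.888, 0.906, 0.877, 0.792]

# AVE: mean of the squared loadings
ave = sum(l ** 2 for l in loadings) / len(loadings)

# Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
sum_l = sum(loadings)
error_var = sum(1 - l ** 2 for l in loadings)
cr = sum_l ** 2 / (sum_l ** 2 + error_var)

print(f"AVE = {ave:.3f}, CR = {cr:.3f}")  # matches the reported 0.725 and 0.929
```

Recomputing the other constructs' rows from their loadings the same way is a quick sanity check on a measurement table.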

Table 3. Convergent Validity & Standardized Path Loadings

Construct                 Item          Loading
Self-Disclosure           Disclosure    1
Reciprocity               Reciprocity   1
Intention to Use          Intention 1   0.766
                          Intention 2   0.853
                          Intention 3   0.867
                          Intention 4   0.858
                          Intention 5   0.844
                          Intention 6   0.859
Interactional Enjoyment   Enjoy 1       0.685
                          Enjoy 2       0.852
                          Enjoy 3       0.805
                          Enjoy 4       0.87
                          Enjoy 5       0.861
                          Enjoy 6       0.848
Intimacy                  Intimacy 1    0.786
                          Intimacy 2    0.888
                          Intimacy 3    0.906
                          Intimacy 4    0.877
                          Intimacy 5    0.792
Trust                     Trust 1       0.648
                          Trust 2       0.751
                          Trust 3       0.811
                          Trust 4       0.769
                          Trust 5       0.838
                          Trust 6       0.826
User Satisfaction         Satisfact 1   0.77
                          Satisfact 2   0.832
                          Satisfact 3   0.851
                          Satisfact 4   0.819
                          Satisfact 5   0.821

Discriminant validity means that measurements of different constructs should be empirically distinct: each construct should be distinct and unique from the others. To confirm discriminant validity, the inter-construct correlations and the square root of the average variance extracted (AVE) were estimated; the result is shown in Table 4. The square root of each AVE (shown on the diagonal in parentheses) was higher than the correlations between the constructs, so discriminant validity is confirmed (Barclay et al., 1995).

Table 4. Discriminant Validity

                          Intention   Interactional
                          to Use      Enjoyment       Intimacy   Reciprocity   Self-Disclosure   Trust     User Satisfaction
Intention to Use          (0.842)
Interactional Enjoyment   0.826       (0.823)
Intimacy                  0.579       0.693           (0.851)
Reciprocity               0.39        0.359           0.351      (1)
Self-Disclosure           0.225       0.256           0.178      0.06          (1)
Trust                     0.725       0.725           0.578      0.294         0.265             (0.777)
User Satisfaction         0.839       0.741           0.517      0.278         0.229             0.755     (0.819)
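The Fornell–Larcker criterion applied here can be checked mechanically: the square root of each construct's AVE must exceed that construct's correlations with every other construct. A sketch with the tightest pair in the tables, trust and user satisfaction (values taken from Tables 2 and 4):

```python
import math

# AVE values from Table 2
ave = {"Trust": 0.602, "User Satisfaction": 0.671}
# Trust <-> User Satisfaction correlation from Table 4
corr = 0.755

checks = {}
for name, a in ave.items():
    sqrt_ave = math.sqrt(a)
    checks[name] = sqrt_ave > corr  # Fornell-Larcker condition
    print(f"{name}: sqrt(AVE) = {sqrt_ave:.3f} vs r = {corr}")
```

Trust passes only narrowly (0.776 vs 0.755), which is visible in the table and worth noting when interpreting the trust construct.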

5.2.2. Hypothesis Tests and Results

After checking the measurement model, the proposed structural model was tested.

Fig. 2. PLS analysis of the research model.

Table 5. Hypothesis tests

H     Path                                          Path Coefficient   t-Value   Result
H1    Self-disclosure → Intimacy                    0.118              1.820     Not significant
H2    Self-disclosure → Trust                       0.282              4.180     Significant
H3    Self-disclosure → Interactional Enjoyment     0.281              4.154     Significant
H4    Reciprocity → Intimacy                        0.673              12.493    Significant
H5    Reciprocity → Trust                           0.422              6.128     Significant
H6    Reciprocity → Interactional Enjoyment         0.515              8.317     Significant
H7    Intimacy → User Satisfaction                  -0.069             0.914     Not significant
H8    Trust → User Satisfaction                     0.485              6.865     Significant
H9    Interactional Enjoyment → User Satisfaction   0.434              5.792     Significant
H10   User Satisfaction → Intention to Use          0.839              31.262    Significant
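The significance verdicts in Table 5 follow from comparing each PLS bootstrap t-value against a two-tailed critical value; the 1.96 large-sample threshold at α = .05 is the conventional choice and is our assumption here, not stated by the authors. A sketch over a subset of the reported values:

```python
# Reported t-values from Table 5 (subset)
t_values = {
    "H1: Self-disclosure -> Intimacy": 1.820,
    "H7: Intimacy -> User Satisfaction": 0.914,
    "H8: Trust -> User Satisfaction": 6.865,
    "H10: User Satisfaction -> Intention to Use": 31.262,
}
CRITICAL_T = 1.96  # two-tailed critical value, alpha = .05, large samples

significant = {h: abs(t) > CRITICAL_T for h, t in t_values.items()}
for h, sig in significant.items():
    print(f"{h}: {'Significant' if sig else 'Not significant'}")
```

This reproduces the pattern in Table 5: H1 and H7 fall below the threshold, all other paths clear it.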

Figure 2 shows the final results, indicating the strength of the relationships between the constructs with estimates of path coefficients. The R-squared value represents the variance explained (the coefficient of determination); together, the R-squared values and path coefficients demonstrate the results of the research model. As shown in Table 5, all hypotheses were supported except for two paths: self-disclosure to intimacy (H1) and intimacy to user satisfaction (H7). Reciprocity showed stronger effects on trust and interactional enjoyment than self-disclosure. Trust and interactional enjoyment were strong indicators of user satisfaction, with nearly identical effect sizes. Intention to use the conversational agent was strongly determined by user satisfaction.

6. Discussion

The two-way ANOVA showed that when the agent engaged in reciprocity and self-disclosure, users rated the experience as more satisfying. The PLS results identified perceived trust and interactional enjoyment as significant mediators between communication style and user satisfaction. However, the paths from self-disclosure to intimacy and from intimacy to user satisfaction were not supported. This means that reciprocity is a stronger variable than self-disclosure in predicting relationship building between an agent and a user.

Reciprocity is a very important element in designing conversational agent recommendation systems (Kahn, Freier, Friedman, Severson, & Feldman, 2004). A quick response from an agent in the form of suggestions or questions builds attraction to the agent and enhances user experience (Hoffman, Birnbaum, Reis, Vanunu, & Sass, 2014). Reciprocity is a more significant determinant of user satisfaction than self-disclosure. This is because smart TV users have a lean-back tendency: they would rather relax and receive recommendations than think for themselves. The lean-back style of interacting with technological devices occurs at a leisurely pace, with users passively receiving information (Jones, Jain, Buchannan, & Marsden, 2003). Furthermore, trust and enjoyment were stronger determinants of user satisfaction than intimacy. This implies that a perception of intimacy with the recommendation agent cannot be developed in a one-time conversation. Repeated interactions are necessary for relationship building, and intimacy might be an outcome of accumulated encounters, while trust and enjoyment can be perceived in a shorter period, or even in the first encounter. User satisfaction is a very strong determinant of intention to use. That is, a high degree of user satisfaction generated by trust and enjoyment will encourage use of the agent recommendation system. Intention to use a recommendation system is critical to the success of many online services; for example, Amazon and Netflix maintain stable, leading market positions by building a high level of trust and user satisfaction. User satisfaction can be enhanced by using a visual avatar, because an agent with a face can generate a higher level of social presence, emotional involvement, and interactional enjoyment (Morie, Chance, Haynes, & Rajppurchit, 2012). Humans perceive conversational agents as social actors; thus, CAs should be designed to be social as well as functional (Weiss & Tscheligi, 2012).
We believe that future conversational agents will require qualities such as sociality, problem solving, reciprocity, and efficient fulfillment of tasks in order to collaborate with humans. To provide a pleasant and satisfying user experience, an agent should be polite and friendly, communicate effectively with users, listen actively to a user's problems, provide as much assistance and help as possible, and be skilled in making self-disclosures the way a companion would. Sociality and reciprocity are the essential features of a good conversational agent. A recommendation system should collect a user's preferences, interests, likes, and dislikes to make customized and curated recommendations. This information can be collected more efficiently by engaging in self-disclosure and reciprocity with users. For example, Jinni.com, the best movie recommendation system according to Cnet.com, queries users' opinions, preferences, and even emotional dispositions. In comparison, Netflix's recommendation system does not query user preferences and relies primarily on viewing history, yet it had a much lower approval rating. As deep learning systems show, accumulated personal information can be used to generate more accurate and personalized interactions. An agent that can reciprocate in interaction will be more believable and realistic. In designing believable agents, it is important that the agent possesses the essential properties needed to interact with others as in real life (Mark & Becker, 1999).
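The idea of accumulating disclosed preferences into a profile and reciprocating with tailored picks can be illustrated with a minimal sketch. All class, field, and movie-catalog details here are hypothetical, invented for illustration; they are not taken from any real recommender's API.

```python
# Illustrative sketch (hypothetical names, not any real system's API):
# a CA accumulates user self-disclosures into a profile and reciprocates
# with recommendations filtered against that profile.
catalog = [
    {"title": "Roman Holiday", "genre": "romantic comedy",
     "stars": ["Audrey Hepburn", "Gregory Peck"]},
    {"title": "Breakfast at Tiffany's", "genre": "romantic comedy",
     "stars": ["Audrey Hepburn"]},
    {"title": "Her", "genre": "romance",
     "stars": ["Joaquin Phoenix", "Scarlett Johansson"]},
]

class PreferenceProfile:
    """Stores disclosed likes/dislikes and scores catalog items against them."""
    def __init__(self):
        self.liked_genres = set()
        self.disliked_genres = set()
        self.liked_stars = set()

    def disclose(self, likes_genre=None, dislikes_genre=None, likes_star=None):
        # each self-disclosure incrementally enriches the profile
        if likes_genre:
            self.liked_genres.add(likes_genre)
        if dislikes_genre:
            self.disliked_genres.add(dislikes_genre)
        if likes_star:
            self.liked_stars.add(likes_star)

    def recommend(self, catalog):
        # rank movies by a simple match score against the disclosed profile
        def score(movie):
            s = 0
            if movie["genre"] in self.liked_genres:
                s += 2
            if movie["genre"] in self.disliked_genres:
                s -= 3
            s += sum(1 for star in movie["stars"] if star in self.liked_stars)
            return s
        return sorted(catalog, key=score, reverse=True)

# a dialogue like Scenario A in Appendix D would yield disclosures such as:
profile = PreferenceProfile()
profile.disclose(likes_genre="romantic comedy", dislikes_genre="horror")
profile.disclose(likes_star="Audrey Hepburn")
top = profile.recommend(catalog)[0]["title"]
```

The point of the sketch is the interaction pattern, not the scoring: each conversational turn that elicits a disclosure adds information the agent can reciprocate with, which is exactly the efficiency argument made above.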

7. Conclusion
The main contributions of this study can be viewed from both theoretical and practical aspects. In the IoT era, conversational agents are becoming more and more popular. They are being deployed in smart phones, smart TVs, smart cars, smart homes, smart computer systems, and even in shopping malls to provide customized service and information to users. Artificially intelligent systems can potentially be a friend, a secretary, or even a lover. Human-to-conversational-agent interaction will increasingly resemble human-to-human relationships, so agents will need human communication qualities. Accordingly, designers should aim to create agents capable of generating feelings of intimacy, trust, attraction, liking, and interactional enjoyment. In designing believable agents, it is important that the agent possesses the essential properties needed to interact with others as in real life.
This study has the following limitations. The results could have been different if the experiment had been conducted outside Korea, or if the sample had been drawn from another demographic. Ongoing self-disclosure can build relationships between humans and computers; however, in this experiment the topic of discussion, selecting and recommending movies, does not involve truly intimate self-disclosure. This study showed that using self-disclosure and reciprocity leads to intention to use, but a longer-term study is needed to determine whether these strategies lead to continuous usage. This experiment used a 2D graphical conversational agent; a 3D conversational agent, or one presented in virtual reality, might generate a higher level of social presence. As a direction for future research, it is also worth investigating whether different personality types affect the results. Furthermore, a conversational agent may play the role of a friend, lover, secretary, etc., which may also affect the results. It would also be interesting to examine the effect of a CA's non-verbal cues combined with verbal cues in HCI studies.


References
Altman, I., Taylor, D.A. (1973). Social penetration: The development of interpersonal relationships. Oxford, U.K.: Holt, Rinehart & Winston.
Archer, R.L. (1979). Role of personality and the social situation. In: Chelune, G.J. (Ed.), Self-disclosure: Origins, patterns, and implications of openness in interpersonal relationships (pp. 28-58). San Francisco, CA: Jossey-Bass.
Axelrod, L., Hone, K. (2005). E-motional advantage: Performance and satisfaction gains with affective computing. In: CHI '05 extended abstracts on human factors in computing systems. Portland, OR, USA: ACM Press.
Bagheri, N., Jamieson, G.A. (2004). Considering subjective trust and monitoring behavior in assessing automation-induced "complacency". In: Vicenzi, D.A., Mouloua, M., Hancock, P.A. (Eds.), Human performance, situation awareness, and automation, HPSAA II. Mahwah, NJ: Erlbaum, pp. 54-59.
Barclay, D., Thompson, R., Higgins, C. (1995). The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2(2), 285-309.
Barrett, J. (2013). Mobile research shows voice control technologies resonating with Apple and Google users [Media release]. http://www.parksassociates.com/blog/article/prjun2013-mobilevoice (accessed 09-01-17)
Berg, J.H., Derlega, V.J. (1987). Themes in the study of self-disclosure. In: Derlega, V.J., Berg, J.H. (Eds.), Self-disclosure: Theory, research and therapy. New York: Plenum, pp. 1-8.
Berger, C.R., Calabrese, R. (1975). Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. Human Communication Research, 1, 99-112. doi:10.1111/j.1468-2958.1975.tb00258.x
Berscheid, E., Snyder, M., Omoto, A.M. (1989). The relationship closeness inventory: Assessing the closeness of interpersonal relationships. Journal of Personality and Social Psychology, 57, 792-807. doi:10.1037/0022-3514.57.5.792
Bickmore, T.W., Picard, R.W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction, 12(2), 293-327. doi:10.1145/1067860.1067867
Bradley, J., Benyon, D., Mival, O., Webb, N. (2010). Wizard of Oz experiments and companion dialogues. In: Proceedings of the 24th BCS interaction specialist group conference, Dundee, United Kingdom, pp. 117-123.


Chin, J.P., Diehl, V.A., Norman, K.L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '88). New York, NY: ACM, pp. 213-218. doi:10.1145/57167.57203
Chin, W.W., Marcolin, B., Newsted, P. (2003). A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research, 14(2), 189-217.
Collins, N.L., Miller, L.C. (1994). Self-disclosure and liking: A meta-analytic review. Psychological Bulletin, 116(3), 457-475. doi:10.1037/0033-2909.116.3.457
Davis, F.D., Bagozzi, R.P., Warshaw, P.R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22, 1111-1132.
DeLone, W.H., McLean, E.R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30. doi:10.1080/07421222.2003.11045748
Derlega, V.J., Metts, S., Petronio, S., Margulis, S.T. (1993). Self-disclosure. Newbury Park, CA: Sage.
Dinev, T., Hart, P. (2006). Privacy concerns and levels of information exchange: An empirical investigation of intended e-services use. e-Service Journal, 4(3), 25-59.
Fraser, N.M., Gilbert, G.N. (1991). Simulating speech systems. Computer Speech and Language, 5, 81-99.
Ghazizadeh, M., Lee, J.D., Boyle, L.N. (2012). Extending the technology acceptance model to assess automation. Cognition, Technology & Work, 14(1), 39-49.
Gouldner, A.W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25, 161-178. http://www.jstor.org/stable/2092623
Gratch, J., Wang, N., Gerten, J., Fast, E., Duffy, R. (2007). Creating rapport with virtual agents. In: Pelachaud, C., Martin, J.C., André, E., Chollet, G., Karpouzis, K., Pelé, D. (Eds.), Intelligent Virtual Agents. Springer, California, USA. doi:10.1007/978-3-540-74997-4_12
Heerink, M., Kröse, B.J.A., Wielinga, B.J., Evers, V. (2008). Enjoyment, intention to use and actual use of a conversational robot by elderly people. In: Proceedings of the Third ACM/IEEE International Conference on Human-Robot Interaction, Amsterdam, pp. 113-120.
Herbsleb, J. (1999). Metaphorical representation in collaborative software engineering. ACM SIGSOFT Software Engineering Notes, 24(2), 117-126.

Jourard, S.M. (1959). Self-disclosure and other-cathexis. Journal of Abnormal and Social Psychology, 59, 428-431. doi:10.1037/h0041640
Koufaris, M. (2002). Applying the technology acceptance model and flow theory to online consumer behavior. Information Systems Research, 13(2), 205-223.
Laurenceau, J., Barrett, L.F., Rovine, M.J. (2005). The interpersonal process model of intimacy in marriage: A daily-diary and multilevel modeling approach. Journal of Family Psychology, 19, 314-323. doi:10.1037/0893-3200.19.2.314
Lee, J.D., See, K.A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46, 50-80.
Lee, M.K.O., Turban, E. (2001). A trust model for consumer internet shopping. International Journal of Electronic Commerce, 6(1), 75-91. doi:10.1080/10864415.2001.11044227
Lee, J.D., Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40(1), 153-184. doi:10.1006/ijhc.1994.1007
Mahlke, S., Minge, M., Thüring, M. (2006). Measuring multiple components of emotions in interactive contexts. In: CHI '06 extended abstracts on human factors in computing systems. ACM Press, Montreal, Quebec, Canada.
Mark, G., Becker, B. (1999). Constructing social systems through computer-mediated communication. Virtual Reality, 4(60). doi:10.1007/BF01434995

Montague, E. (2010). Validation of a trust in medical technology instrument. Applied Ergonomics, 41(6), 812-821.
Moon, Y.M. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers. Journal of Consumer Research, 26(4), 323-339. doi:10.1086/209566
Moorman, C., Zaltman, G., Deshpandé, R. (1992). Relationships between providers and users of market research: The dynamics of trust within and between organizations. Journal of Marketing Research, 29(3), 314-329. doi:10.2307/3172742
Morgan, R.M., Hunt, S.D. (1994). The commitment-trust theory of relationship marketing. Journal of Marketing, 58(3), 20-38. doi:10.2307/1252308
Morie, J.F., Chance, E., Haynes, K., Rajpurohit, D. (2012). Embodied conversational agent avatars in virtual worlds: Making today's immersive environments more responsive to participants. In: Hingston, P. (Ed.), Believable Bots: Can Computers Play Like People? Springer, Berlin, pp. 99-118.


Muir, B.M. (1987). Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies, 27, 527-539. doi:10.1016/S0020-7373(87)80013-5
Nass, C., Brave, S. (2007). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.
Nass, C., Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. doi:10.1111/0022-4537.00153
Parasuraman, R., Wickens, C.D. (2008). Humans: Still vital after all these years of automation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50, 511-520.
Pavlou, P.A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce, 7(3), 101-134. doi:10.1080/10864415.2003.11044275
Pearce, W.B., Sharp, S.M. (1973). Self-disclosing communication. Journal of Communication, 23(4), 409-425. doi:10.1111/j.1460-2466.1973.tb00958.x
Reeves, B., Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.
Satzger, B., Endres, M., Kießling, W. (2006). A preference-based recommender system. In: Bauknecht, K., Pröll, B., Werthner, H. (Eds.), E-Commerce and Web Technologies. Berlin, Heidelberg: Springer-Verlag, pp. 31-40. doi:10.1007/11823865_4
Seddon, P.B., Kiew, M.Y. (1994). A partial test and development of the DeLone and McLean model of IS success. In: DeGross, J.I., Huff, S.L., Munroe, M.C. (Eds.), Proceedings of the 15th International Conference on Information Systems, Vancouver, Canada, pp. 99-110.
Shin, D.H., Choo, H.S. (2011). Modeling the acceptance of socially interactive robotics: Social presence in human-robot interaction. Interaction Studies, 12(3), 430-460. doi:10.1075/is.12.3.04shi
Sprecher, S., Treger, S., Wondra, J.D., Hilaire, N., Wallpe, K. (2013). Taking turns: Reciprocal self-disclosure promotes liking in initial interactions. Journal of Experimental Social Psychology, 49(5), 860-866. doi:10.1016/j.jesp.2013.03.017
Sung, J.Y., Guo, L., Grinter, R.E., Christensen, H.I. (2007). 'My Roomba is Rambo': Intimate home appliances. In: Proceedings of the 9th International Conference on Ubiquitous Computing, pp. 145-162.
Taylor, D.A., Hinds, M. (1985). Disclosure reciprocity and liking as a function of gender and personalism. Sex Roles, 12(11-12), 1153-1157. doi:10.1007/BF00287824

Van der Heijden, H. (2003). Factors influencing the usage of websites: The case of a generic portal in the Netherlands. Information & Management, 40(6), 541-549.
Van der Heijden, H. (2004). User acceptance of hedonic information systems. MIS Quarterly, 28, 695-704.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4), 342-365.
Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425-478.
Wang, W., Benbasat, I. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6, 72-101.
Weiss, A., Tscheligi, M. (2012). Rethinking the human-agent relationship: Which social cues do interactive agents really need to have? In: Hingston, P. (Ed.), Believable Bots: Can Computers Play Like People? Berlin: Springer Verlag, pp. 1-28.
Weiss, B., Wechsung, I., Kühnel, C., Möller, S. (2015). Evaluating embodied conversational agents in multimodal interfaces. Computational Cognitive Science, 1(6). doi:10.1186/s40469-015-0006-9
Zhang, P., Li, N. (2005). The importance of affective quality. Communications of the ACM, 48(9), 105-108.


Appendix A. Measurement Items of Research Constructs

Self-Disclosure (Altman & Taylor, 1973; Jourard, 1959; Moon, 2000; Collins & Miller, 1994; Berg & Derlega, 1987)
1. The CA and I exchanged enough personal information.
2. I expressed my movie preferences when the CA asked me about my taste in movies.
3. I can speak to the CA frankly and candidly.
4. The amount of self-disclosed information (i.e., number of words, sentence length) is enough.
5. My thoughts and feelings were conveyed frankly to the CA.
6. My self-disclosures contained information and facts.
7. I used many intimate words in my self-disclosure.
8. I can talk about anything with the CA, even my secrets.

Reciprocity (Collins & Miller, 1994; Taylor & Hinds, 1985)
1. The CA gave good responses to my questions.
2. I felt that the CA was like my companion or friend.
3. The CA was helpful when I asked for information.
4. I think the CA and I were able to help each other.
5. I think the CA and I exchanged opinions as though we were equal in our social status.
6. I felt solidarity with the CA after our conversation.
7. I think the CA will support me emotionally.

Intimacy (Berscheid et al., 1989)
1. I feel close to the CA.
2. I feel that the CA is my close friend.
3. I feel emotionally close to the CA.
4. I think the CA will affect my selection of media contents.
5. The CA uses supportive statements to build favor with me.
6. I developed a sense of familiarity with the CA.

Trust (Wang & Benbasat, 2005; Dinev & Hart, 2006; Morgan & Hunt, 1994; Moorman et al., 1992)
1. I have faith in what the CA is telling me.
2. The CA provides me with unbiased and accurate movie recommendations.
3. The CA is honest.
4. The CA is trustworthy.
5. The CA wants to know and understand my needs and preferences.
6. The CA wants to remember my interests.
7. I believe that the CA provides a reliable service.
8. I can trust the CA with my personal information.
9. I can trust the information provided by the CA.
Items 5 and 6 were deleted from the factor analysis.

Interactional Enjoyment (Koufaris, 2002; Van der Heijden, 2003, 2004)
1. It is fun and enjoyable to share a conversation with the CA.
2. I am absorbed in the conversation with the CA.
4. The conversation with the CA is exciting.
5. I enjoy watching a movie more if it was recommended by the CA than when I choose it myself.
6. Services provided by the CA are more entertaining and attractive than services without a CA.

User Satisfaction (Chin, Diehl, & Norman, 1988)
1. I was satisfied with the experience of using a dialogue with the CA to complete tasks.
2. I am satisfied with the CA's recommendation service.
3. Interacting with the CA was a pleasant and satisfactory experience.
4. The dialogue with the CA gave me useful information.
5. I am satisfied with asking the agent for information because it is easier than trying to find it myself.
6. I feel that the CA is an expert.
7. The CA's responses in the interaction were appropriate.
8. The overall assessment of conversing with the CA was satisfactory.

Intention to Use (Davis, Bagozzi, & Warshaw, 1992; Wang & Benbasat, 2005)
1. I will use the CA system again.
2. I would recommend the CA system to others.
3. If this CA system were commercially available, I would purchase it.
4. I would like the CA system to assist me in making decisions.
5. I am satisfied with using the CA because it is easy to use and better than having to do it myself.
6. When using smart media, I will watch the content recommended by the CA.
7. If I had a chance to use this CA again, I would want to talk a lot.

Appendix B. Descriptive statistics of the respondents' characteristics (N = 225)

Variable               Category                       Frequency   Percentage
Gender                 Male                           82          36.4%
                       Female                         143         63.6%
Age                    10-19                          3           1.3%
                       20-29                          103         45.8%
                       30-39                          69          30.7%
                       Over 40                        50          22.2%
Occupation             Employee                       23          10.2%
                       Professional                   50          22.2%
                       Self-employed                  12          5.3%
                       Student                        101         44.9%
                       Housewife                      22          9.8%
                       Etc.                           17          7.6%
Education              Middle & high school student   3           1.3%
                       High school graduate           29          12.9%
                       Undergraduate student          95          42.2%
                       Bachelor's degree              35          15.6%
                       Graduate student               8           3.6%
                       Master's degree                9           4.0%
                       Ph.D. & doctorate              46          20.4%
Siri experience        Yes                            116         51.6%
                       No                             109         48.4%
Smart TV experience    Yes                            169         75.1%
                       No                             56          24.9%
Movie preferences      Yes                            195         86.7%
                       No                             30          13.3%
Watching movies        1 per week                     39          17.3%
                       1 per month                    143         63.6%
                       1-5 per year                   43          19.1%
Characteristics        Extraverted                    86          38.2%
                       Adaptability                   55          24.4%
                       Openness                       44          19.6%
                       Neurotic                       6           2.7%
                       Introverted                    34          15.1%

Appendix C. Descriptive statistics: mean (SD) by condition and construct (7-point Likert scale)

Group (Gender)   Self-disclosure   Reciprocity     Intimacy        Trust           Interactional Enjoyment   User Satisfaction   Intention to Use
A (Male)         4.875 (0.559)     5.458 (0.674)   4.675 (1.173)   5.432 (0.688)   5.125 (0.737)             5.663 (0.601)       5.656 (0.522)
A (Female)       4.950 (0.602)     5.216 (0.755)   4.796 (0.912)   5.384 (0.541)   5.324 (0.852)             5.500 (0.717)       5.633 (0.797)
B (Male)         4.412 (0.684)     3.962 (0.693)   3.485 (1.132)   4.933 (0.541)   4.333 (1.110)             5.008 (0.840)       4.667 (1.152)
B (Female)       4.394 (0.749)     4.063 (0.801)   3.786 (1.018)   4.931 (0.414)   4.224 (0.948)             5.021 (0.622)       4.828 (0.905)
C (Male)         3.707 (0.755)     4.754 (0.528)   4.011 (1.084)   4.860 (0.748)   4.325 (1.230)             4.926 (0.895)       4.842 (0.955)
C (Female)       3.949 (0.638)     4.876 (0.543)   4.084 (0.989)   4.954 (0.544)   4.618 (1.226)             5.219 (1.115)       5.258 (1.118)
D (Male)         3.844 (0.983)     3.714 (0.986)   3.486 (1.172)   4.746 (0.455)   3.857 (0.903)             4.686 (0.794)       4.460 (0.806)
D (Female)       4.034 (0.864)     4.029 (0.857)   3.814 (1.192)   4.638 (0.414)   4.264 (0.776)             4.759 (0.739)       4.483 (0.887)

Appendix D. Combinations of Self-Disclosure and Reciprocity Scenarios

A: Self-Disclosure is High & Reciprocity is High

User: I'm bored. Can you recommend some movies?
Smart Media: Yes, I'll make some recommendations. What kind of movie genre do you like? Romantic comedy? Action? Thriller?
User: I'd like to see a romantic comedy. I personally do not like horror movies.
Smart Media: I like romantic comedy too. It is such a lighthearted and fun genre. Do you mind if I ask who your favorite movie star is?
User: My favorite actors are Will Smith, Brad Pitt, Audrey Hepburn and Meg Ryan.
Smart Media: I will remember this. Your favourite movie actors are really wonderful.
User: Thank you so much if you say so. Can you recommend some films with these movie stars?
Smart Media: Yes, I can show you a list of films featuring these movie stars.
User: Looking at this list, I think I want to watch an Audrey Hepburn movie. Audrey Hepburn has a really attractive face. Don't you think so?
Smart Media: Yes, I agree. She is a really attractive actress who dominated her era. She has her own Hepburn style. How about watching 'Roman Holiday' or 'Breakfast at Tiffany's'?
User: What are these movies about?


Smart Media: 'Roman Holiday' is about a princess (Audrey Hepburn) who escapes her royal life with the help of a gentleman (Gregory Peck) to enjoy Rome. 'Breakfast at Tiffany's' is set against the background of New York City in the early 1940s and is about a poor girl yearning to be in high society who dreams of wealth and affluence.
User: Which do you recommend between the two?
Smart Media: 'Breakfast at Tiffany's'. It won an Academy Award and has the famous theme song 'Moon River'. Because of this movie, Audrey Hepburn became famous. If you want, I can show you a 1-minute highlight video clip first.
User: Ok, show me.

B: Self-Disclosure is High & No Reciprocity

User: I'm bored. Can you recommend some movies?
Smart Media: Yes, I can make some recommendations. What kind of movie genre do you like? Romantic comedy? Action? Thriller?
User: I want to see a romantic comedy. My favourite actors are George Clooney, Audrey Hepburn, Meg Ryan, and Will Smith. But I'm in the mood to see an Audrey Hepburn movie.
Smart Media: Here are some movies starring Audrey Hepburn.
User: What do you think of Audrey Hepburn?
Smart Media: I don't have any personal opinion about her.
User: Audrey Hepburn is really beautiful. Don't you think so?
Smart Media: In my thinking, my opinion is not important.

User: Audrey Hepburn is an icon of that era. She is a wannabe woman that every girl wants to emulate. Her fashion and beauty are timeless. Even I try to copy the Hepburn look with my makeup, dress and style.
Smart Media: What sort of Audrey Hepburn movie would you like to watch?
User: What kind of Audrey Hepburn movie do you recommend?
Smart Media: Okay, I will show you a list of her movies. Simply select one.
User: This movie called 'Roman Holiday' with a picture of Audrey Hepburn eating ice cream looks impressive. I want to watch it.
Smart Media: Then I'll show you now.
User: Ok, show me.

C: No Self-Disclosure & Reciprocity is High

User: I'm bored. Can you recommend some movies?
Smart Media: Yes, I can make some recommendations. What kind of movie genre do you like? Romantic comedy? Action? Thriller?
User: No. I don't have any preferences.
Smart Media: Do you have any favourite movie stars?
User: I do not have any ideas. Can you recommend any good movies?
Smart Media: I'll show you a movie list and explain them one by one. The first film, called 'Her', is about an artificial intelligence operating system, 'Samantha' (Scarlett Johansson).


'Theodore' (Joaquin Phoenix) was a lonely and empty man until he meets Samantha. Samantha would always be there to listen and talk to him. This movie describes how humans and computers may interact with one another. My other recommendations are two movies starring Audrey Hepburn, 'Roman Holiday' and 'Breakfast at Tiffany's'. 'Breakfast at Tiffany's' is about a woman who strives to be in high society. It was filmed in 1961 but was re-mastered in 2012. 'Roman Holiday' is about a princess taking a holiday in Rome and co-stars Gregory Peck.
User: Which one do you recommend?
Smart Media: My recommendations are all good movies, but I don't really know what sort of movies you like. May I suggest you watch a classic romantic comedy?
User: You mean an Audrey Hepburn movie?
Smart Media: Yeah, Audrey Hepburn was a charming actress who dominated her era. She gave rise to the Hepburn style.
User: Which Hepburn movie do you recommend then?
Smart Media: How about 'Breakfast at Tiffany's'? It won an Academy Award and has the famous theme song 'Moon River'. It made Audrey Hepburn into a famous movie star. Let me show you a highlight video.
User: Ok, show me then.

D: No Reciprocity & No Self-Disclosure

User: I'm bored. Can you recommend some movies?


Smart Media: Yes, I can make some recommendations. What kind of movie genre do you like?
User: I don't have any preferences. Can you recommend some?
Smart Media: I will show a list of romantic comedies. Just choose one from the list.
User: I can see the list. Can you tell me about 'Roman Holiday' or 'Breakfast at Tiffany's'?
Smart Media: 'Roman Holiday' is about Princess Anne (Audrey Hepburn), who is fed up with the constraints of royal life and consequently runs off with a gentleman to roam freely around Rome and experience a normal person's life. 'Breakfast at Tiffany's' is set against the background of New York in the early 1940s and tells the story of a poor girl named Holly who dreams of being in high society with wealth and affluence.
User: Ok, can you show me a highlight video clip?
Smart Media: Yeah, I'll show a highlight video of each movie.
User: Oh, after the video, I would like to watch the movie 'Roman Holiday'. What do you think?
Smart Media: I cannot tell you my opinion.
User: Audrey Hepburn is beautiful and gay. What do you think?
Smart Media: I do not have any opinions. She gave rise to the Hepburn style that dominated her era.
User: Yes, I want to check her fashion style. Please play the movie.

Smart Media: Let me show you now.

The CA with a face image in this experiment was provided by the GUILE 3D company.

Highlights
 CASA and uncertainty reduction theory are applicable to human-computer interaction.
 Agents should be designed to generate feelings of intimacy, trust and interactional enjoyment.
 Self-disclosure and reciprocity are essential for user satisfaction and intention to use when communicating with an agent.
 The findings of this paper have implications for research as well as practical applications for advanced conversational agents.
