Pre-sale vs. Post-sale e-satisfaction: Impact on repurchase intention and overall satisfaction

THORSTEN POSSELT AND EITAN GERSTNER

E-tailers deliver services in two phases: before the sale takes place, and after the sale is over. Previous research in behavioral science has suggested that the time sequence of service delivery may affect customer evaluation of service, and therefore it may also affect e-satisfaction. To determine how much to invest in pre-sale services relative to post-sale services, e-tailers should examine the impacts of customer satisfaction with services delivered in each phase on repurchase intention and overall service rating. In this paper, we measure these impacts and find that post-sale service has a much stronger impact on customer repurchase intention and overall service ratings compared to service delivered pre-sale. Because of this recency effect (buyers give more weight to e-service they receive late than to e-service received earlier), e-tailers are advised to put a strong emphasis on post-sale service.

THORSTEN POSSELT is Professor of Retailing and Service Management, University of Wuppertal, Germany; e-mail: [email protected]

EITAN GERSTNER is Professor of Marketing, University of California, Davis; e-mail: [email protected]

The authors are grateful to Gerd Ronning, Dubravko Radic, and Pablo Berger for helpful comments and advice, and to Bill Farnham, Michal Gerstner, and Lupe Sanchez for help in editing.

© 2005 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.

JOURNAL OF INTERACTIVE MARKETING VOLUME 19 / NUMBER 4 / AUTUMN 2005
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/dir.20048

INTRODUCTION

Customer satisfaction measurements are extensively used to monitor and improve business performance (see Szymanski & Henard, 2001, for a meta-analysis of the empirical evidence). Many companies believe that improved customer satisfaction will lead to improved customer loyalty, which will eventually improve profit. Researchers and consultants have suggested different methods for measuring customer satisfaction and estimating its impact on customer loyalty and profit (Evanschitzky, Gopalkrishnan, Hesse, & Ahlert, 2004; Fornell, 1992; Loverman, 1998; Oliver, 1977, 1993; Parasuraman, Zeithaml, & Berry, 1988, 1994; Reichheld, 2003; Szymanski & Hise, 2000; Winer, 2001; Yi, 1990).

Customer satisfaction can be measured at the transaction (service encounter) level or as a global assessment of cumulative service encounters. One approach for measuring customer satisfaction with a service encounter is to ask customers to compare the service they received with a pre-consumption standard or expectation (Oliver, 1977, 1993; Yi, 1990). An example of a customer satisfaction measure for service encounters based on this confirmation/disconfirmation approach is the SERVQUAL index: Parasuraman, Zeithaml, and Berry (1988, 1994) identified five important dimensions that affect customers' evaluations of the quality of a service provider: reliability, assurance, responsiveness, empathy, and tangibles. An example of a global customer satisfaction measure that aggregates service encounters across service providers within industries is the index of overall customer satisfaction developed by Fornell (1992).

The growing popularity of e-commerce has created the need for studies specifically aimed at exploring the dimensions that affect customer satisfaction (e-satisfaction) with services offered by e-tailers, and customer loyalty to e-tailers.
The development of the Internet helped e-tailers automate many services, thus replacing person-to-person interactions with person-to-computer interactions. Academic research has identified five major dimensions that customers use to evaluate Web sites: (1) information availability and content, (2) ease of use, (3) privacy and security, (4) graphics and style, and (5) fulfillment (Zeithaml, Parasuraman, & Malhotra, 2002). Focusing on
e-tailers, Szymanski and Hise (2000) used online survey research and found that convenience, site design, and financial security dominate consumer assessments of e-satisfaction (this study was successfully replicated for e-tailers in Germany by Evanschitzky, Gopalkrishnan, Hesse, and Ahlert, 2004). Similar results were obtained by Burke (2002). Srinivasan, Anderson, and Ponnavolu (2002) investigated the antecedents and consequences of customer loyalty in an online business-to-consumer context. They identified eight factors that potentially impact e-loyalty (customization, contact interactivity, cultivation, care, community, choice, convenience, and character) and proposed methods for measuring these factors. In a recent study, Wolfinbarger and Gilly (2003) identified four factors that strongly predict customer judgment of e-tail service quality: Web design, fulfillment/reliability, privacy/security, and customer service. Applying factor analysis and LISREL, they found that fulfillment/reliability (i.e., accurate product display and description, on-time delivery, and delivery of the right product) is the largest and most consistent predictor of e-tail quality. The authors presented a scale for the measurement of e-tail quality.

Previous research in behavioral science has suggested that the time sequence of service delivery may affect customer evaluation of service, and therefore it may also affect e-satisfaction. Specifically, when evaluating overall satisfaction with a service encounter, customers may assign a higher importance weight to post-sale service encounters than to pre-sale service encounters because of a "service-ending effect" or a "recency effect" (see the discussion below). Building on this theory, this paper explores how satisfaction with pre-sale service activities and satisfaction with post-sale service activities influence overall customer satisfaction and intention to repurchase from e-tailers.
Such an investigation is important because it can guide e-tailers who make decisions on how to allocate investments between pre-sale and post-sale activities. The plan of the paper is as follows: First, we briefly discuss the theories that motivated our research, and derive two testable hypotheses that are based on these theories. Second, we discuss the data and method used. Third, we present and discuss the analysis, the results, and their implications. Finally, we offer a discussion on the limitations of this research and on directions for future research.

PRE-SALE vs. POST-SALE CUSTOMER SATISFACTION

In this section, we discuss two streams of research that lead to two hypotheses concerning the impact of pre-sale and post-sale satisfaction on overall satisfaction with a service experience provided by an e-tailer.

Customer Satisfaction and Service Ending

Theories in the behavioral sciences suggest that people remember only certain events in service experiences. There is evidence that the overall assessment of a service is strongly influenced by the ending of the experience (Ariely & Carmon, 2000; Chase & Dasu, 2001; Kahneman, Fredrickson, Schreiber, & Redelmeier, 1993; Redelmeier & Kahneman, 1996). These studies may also imply that post-purchase satisfaction has a stronger influence than pre-purchase satisfaction on the overall assessment of a service provided by an e-tailer.

Customer Satisfaction and Order Effects

The theory on order effects suggests that when subjects make repeated judgments concerning information they acquire over time, their judgment may be influenced by the timing in which the information is acquired (Anderson, 1981; Asch, 1946; Haugtvedt & Wegener, 1994; Lund, 1925). A primacy effect exists when information seen early is given greater weight ("first impressions last the longest"). A recency effect exists when information seen late by a subject is given greater weight than information seen earlier ("you're only as good as your last performance"). Order effects have been studied in a variety of business topics, including auditing (e.g., Anderson & Maletta, 1999; Asare, 1992), decision making (e.g., Johnson, 1995), performance evaluation (e.g., Highhouse & Gallo, 1997), and marketing (Johar, Jedidi, & Jacoby, 1997). These studies have shown that both primacy and recency effects may exist, depending on the specific situation and evaluation process. For example, in consumer marketing, Johar, Jedidi, and Jacoby (1997) found a recency effect in consumer evaluations of new brands over time as more information about the brands is acquired. On the other hand, in the auditing process, Anderson and Maletta (1999) found that a primacy effect may lead to over-auditing and thus to an inefficient use of audit resources.

We are not aware of any research aimed at examining whether there is evidence of a service-ending effect or a primacy/recency effect in the service process provided by e-tailers. Next, we present the hypotheses to be tested below.

Testable Hypotheses for e-Service

Services provided by e-tailers are delivered in two phases: pre-sale services and post-sale services. Therefore, customers may experience two separate service encounters: a pre-sale service encounter and a post-sale service encounter. This separation exists because of the time lag between purchase and product fulfillment (Cao & Zhao, 2004). A practical way to separate pre-sale and post-sale e-service encounters is to examine service dimensions before a customer completes a purchase transaction (i.e., before checkout from the e-tailer's site) and service dimensions after checkout (i.e., when product fulfillment occurs).

Based on the theories from the behavioral sciences described above, we test hypotheses regarding the impact of post-sale service relative to pre-sale service on customer intentions to shop again and on customers' overall rating of the e-service. To test the hypotheses, we use data collected from the Web site BizRate.com, which reports customer evaluations of e-tailers' overall services: specifically, customers' likelihood to buy again (repurchase intention) and customers' overall experience with a particular purchase (overall rating). We use these two variables as the dependent variables (see Table 3 below). The data allow us to examine the impact of eight pre-sale independent variables and five post-sale independent variables on the two dependent variables (see Tables 1-3 below). Our hypotheses are:

H1: Post-sale satisfaction variables have a stronger effect on repurchase intention than do pre-sale variables.

H2: Post-sale satisfaction variables have a stronger effect on overall ratings than do pre-sale variables.

In the next section we discuss the research method and data used to test the hypotheses.


RESEARCH METHOD AND DATA

To test the hypotheses above, we collected data from the Web site BizRate.com. BizRate combines customer feedback data on services delivered by e-tailers from two different sources. The first source is e-tailers that have allowed BizRate to collect feedback directly from their customers as they make purchases. These customers are asked to fill in a feedback form and to comment on their experiences directly after a purchase at the checkout (receipt) page; a follow-up e-mail asks about post-order service quality.[1] The second source is a panel of over 1.3 million active online shoppers who have volunteered to provide ratings and complete the same feedback form. All customer evaluations are calculated for each source separately by BizRate.com; however, only the weighted averages of the responses (across the two sources) are shown on the Web site (see Appendix A).

BizRate screens the customer ratings and excludes e-tailers that do not have a sufficient number of customer reviews (fewer than 20 within three months). It also excludes suspicious reviews, e-tailers with unusual activity, and e-tailers that have recently changed sites. The site reports the average rating for each e-tailer calculated from the customer surveys of the most recent quarter (last 90 days). It does not show the actual number of customer reviews for each e-tailer during the quarter, but it displays the total number of ratings for each e-tailer since the year 2000.

For this study, we selected a sample of e-tailers' ratings from the Web site during two phases: the third week of January 2004 and the third week of April 2004. Since BizRate ratings are calculated based on the last 90 days, the second phase represents a completely new set of customer evaluations. The collection process for each phase did not exceed two days (to make sure the data did not change because of a weekly update). The sample includes only e-tailers with at least 1,000 customer evaluations since the year 2000. This ensures an expected minimum average of more than 100 customer reviews per three months for stores in this category.[2] We eliminated four observations from the sample because of data problems.[3] The data in the sample include 1,580 e-tailers representing an expected number of about 3 million customer reviews.[4] This implies an average of about 2,000 customer reviews for each e-tailer.

Table 1 shows the service variables used by BizRate to measure pre-sale service satisfaction, using a questionnaire distributed at checkout (immediately after a sale is made). Table 2 shows the service variables used by BizRate to measure post-sale service satisfaction, using a questionnaire distributed immediately after product delivery. Finally, Table 3 shows the summary variables used by BizRate to measure overall customer service evaluations. To test hypotheses 1 and 2, we used the variables in Tables 1 and 2 as independent variables to predict the dependent variables in Table 3.

TABLE 1  Independent Variables: Pre-sale Services

EASE: How easily were you able to find the product you were looking for
SELECTION: Types of products available
CLARITY: How clear and understandable was the product information
PRICE: Prices relative to other Web sites
LOOK: Overall look and design of the site
SHIP-FEE: Shipping charges
SHIP-OPTIONS: Desired shipping options were available
CHARGE: Total purchase amount (including shipping/handling charges) displayed before order submission

TABLE 2  Independent Variables: Post-sale Services

AVAILABILITY: Product was in stock at time of expected delivery
TRACKING: Ability to track orders until delivered
ON-TIME: Product arrived when expected
EXPECTATION: Correct product was delivered and it worked as described/depicted
SUPPORT: Availability/ease of contacting, courtesy and knowledge of staff, resolution of issue

TABLE 3  Dependent Variables

SHOP-AGAIN: Likelihood to buy again from this store
OVERALL-RATING: Overall experience with this purchase

All variables were evaluated on a scale of 1 to 10. In the BizRate evaluation form, ratings of 1 to 5 were referred to as poor performance, 6 as satisfactory, 7 to 8 as good, and 9 to 10 as outstanding performance. The form also included facial expressions to describe the satisfaction level with each service variable (see Appendix B). Appendix C provides descriptive statistics computed for the sample of 1,570 observations, which are also the basis for the estimations below.

[1] According to BizRate, "this continuous process ensures that BizRate.com is able to collect timely, detailed and accurate ratings and feedback about both the ordering and fulfillment process of every participating e-tailer."

[2] These e-tailers received a number of customer ratings ranging between 1,000 and 2,500. The average of this range is 1,750, and if we divide it by 16 quarters (equivalent to the period since January 2000), we obtain 109 ratings for the January 2004 data and 103 ratings for the April 2004 data. This is a conservative estimate for two reasons: (1) not all businesses have operated since the year 2000, so the number of quarters may be lower, and (2) online business has increased over time, so the number of customer ratings is likely to have increased within the last three years. If we had included the next smaller range of 500 to 1,000 customer reviews since 2000, we would have an average of only about 47 customer reviews per three months for these e-tailers in the January 2004 data and 44 in the April 2004 data (750/16 = 46.88; 750/17 = 44.12).

[3] Four e-tailers were excluded from the January data: Microsoft's overall customer rating was indicated to be 10.8, which is outside the scale of 1-10 used by BizRate. Data for eCampus was not available, although it was rated by more than 5,000 customers. For MarkTheWorld.com, the overall customer rating was available, but all the detailed customer ratings were missing. EtechWarehouse.com was the only retailer with extremely bad customer ratings (1.0 for overall customer satisfaction and for the intention to shop again).

[4] BizRate only reports the minimum number of customer evaluations obtained for each e-tailer (for example, at least 5,000 evaluations, or at least 10,000 evaluations). From these data, we inferred the ranges of customer evaluations for each e-tailer (for example, between 5,000 and 10,000 customer evaluations). To estimate the total number of customer evaluations, we used the midpoint of each interval. These midpoints, however, reflect the total number of evaluations obtained since BizRate started to evaluate the e-tailer. Therefore we divided each midpoint by the number of quarters an e-tailer has been evaluated by BizRate and added up these numbers to obtain a total of 3,181,558 customer evaluations. We excluded evaluations obtained for travel agencies, airlines, and other service providers because BizRate uses a different evaluation form for these businesses.

RESULTS

We used two approaches to strengthen the evidence needed to support hypotheses 1 and 2. First, we examined the impact of each post-sale variable relative to the impact of each pre-sale variable on the dependent variables. Second, we examined the impact of the post-sale variables as a group relative to the impact of the pre-sale variables as a group on the dependent variables.

The Impact of Individual Variables on Dependent Variables

To estimate the impact of individual pre-sale satisfaction variables (Table 1) and post-sale satisfaction variables (Table 2) on the dependent variables (Table 3), we ran a regression to determine which of the independent variables had a significant effect (at the 5% level) on the dependent variables. The cross-section OLS estimations reported in Table 4 explain more than 90% of the variance in the dependent variables of both regressions (the corresponding correlation coefficients between the significant predictor variables are reported in Appendix D). To check for multicollinearity, we used variance inflation factors (VIF), which are the diagonal elements of the inverse correlation matrix.[5] The VIF values reported in Table 4 are below 10, showing that there is no serious multicollinearity problem among the independent variables (Neter, Wasserman, & Kutner, 1985).

[5] The variance inflation factors VIF_i are given by (1 - R_i^2)^-1, where R_i^2 is the R^2 obtained from regressing the ith independent variable on all the other independent variables. Consequently, a high VIF indicates that R_i^2 is close to 1 and therefore suggests multicollinearity.

TABLE 4  OLS Estimates: Relevant Pre-sale and Post-sale Effects on Dependent Variables

PREDICTOR VARIABLES   SHOP-AGAIN*     VIF     OVERALL-RATING*   VIF
Constant              -1.52 (.00)             -1.63 (.00)
Pre-sale variables
  EASE                -.18 (.00)      3.97    -.01 (.63)        3.79
  SELECTION           .07 (.00)       2.51    -.04 (.02)        2.51
  CLARITY             .08 (.00)       3.70    .01 (.50)         3.70
  PRICE               .03 (.14)       2.10    .03 (.00)         2.10
  LOOK                .08 (.01)       3.53    .05 (.05)         3.53
  SHIP-FEE            .00 (.40)       1.58    .02 (.00)         1.58
  SHIP-OPTIONS        .00 (.61)       2.04    -.01 (.22)        2.04
  CHARGES             .14 (.00)       2.71    .08 (.00)         2.71
Post-sale variables
  AVAILABILITY        .08 (.00)       4.48    .14 (.00)         4.48
  ORDER TRACKING      .00 (.70)       6.30    .00 (.58)         6.30
  ONTIME              .25 (.00)       8.10    .37 (.00)         8.10
  EXPECTATION         .32 (.00)       3.78    .31 (.00)         3.78
  SUPPORT             .30 (.00)       5.67    .22 (.00)         5.67
Fmodel (p level)      1425.2 (.000)           2477.9 (<.001)
R2 (R2 adjusted)      .923 (.922)             .954 (.954)

* Significance levels in parentheses.

For the first equation (SHOP-AGAIN), five predictor variables from the "pre-ordering satisfaction" category ("ease of finding what you are looking for," "selection of products," "clarity of information," "overall look and design of the site," and "charges stated clearly before order submission") have a significant impact on the dependent variable repurchase intention, and four predictor variables from the "post-fulfillment satisfaction" category ("availability of products you wanted," "on-time delivery," "product met expectations," and "customer support") have a significant impact on repurchase intention. For the second equation, again five predictor variables from the "pre-ordering satisfaction" category ("selection of products," "prices relative to other on-line merchants," "overall look and design of the site," "shipping charges," and "charges stated clearly before order submission") have a significant impact on the dependent variable overall rating. The same four predictor variables as above (from the "post-fulfillment satisfaction" category) have a significant impact on overall rating.

Table 4 highlights the following result, which is consistent with the hypotheses above: the three independent variables with the strongest impact on repurchase intention and overall rating are all from the post-sale group (specifically, ONTIME, EXPECTATION, and SUPPORT).

Except for two cases, all estimated significant coefficients in the regressions are positive. The pre-sale coefficient EASE has a significantly negative effect on the dependent variable SHOP-AGAIN. A possible explanation is that sites with detailed content and graphics may also be difficult to navigate; if "ease of finding what you are looking for" also means less information, the impact on repurchase intentions could be negative. The coefficient of SELECTION has a small but significantly negative effect on the dependent variable OVERALL-RATING. A possible explanation is that a large selection of products might be confusing for customers, which has a negative impact on the dependent variable. Interestingly, SELECTION has a positive impact on repurchase intention. This implies that a large selection may motivate customers to shop again even though the impact of this independent variable on overall rating is negative.

To test whether the group of pre-sale satisfaction variables and the group of post-sale satisfaction variables are significantly different from zero, we conducted a partial F test for both regressions (F_pre-sale = 19.10 (p < .01) and F_post-sale = 1776.99 (p < .01) in the SHOP-AGAIN regression; F_pre-sale = 12.57 (p < .01) and F_post-sale = 3378.31 (p < .01) in the OVERALL-RATING regression). The null hypothesis that all post-sale coefficients are zero can be rejected at the 1% level for both regressions. For the pre-sale coefficients, the null hypothesis can be rejected at the 5% level only for the overall-rating regression.

Next, we show that the coefficients of the independent variables EXPECTATION, SUPPORT, and ONTIME from the post-sale variable group have a significantly greater impact on both dependent variables than any of the pre-sale satisfaction variables. To obtain this result, we tested whether each of the significant coefficients in the post-sale group is equal to each of the significant coefficients in the pre-sale group (Greene, 2003, pp. 486-488). Table 5 reports the Wald statistics of these tests for the SHOP-AGAIN equation. The null hypotheses

TABLE 5  Wald Statistics of Pre-sale vs. Post-sale Coefficients: SHOP-AGAIN

POST-SALE            PRE-SALE COEFFICIENTS
COEFFICIENTS         EASE -.18        SELECTION .07    CLARITY .08      LOOK .08         CHARGES .14
AVAILABILITY .08     45.62 (p<.001)   .01 (p>.1)       .004 (p>.1)      .004 (p>.1)      5.01 (p<.03)
ONTIME .25           137.19 (p<.001)  40.70 (p<.001)   27.25 (p<.001)   21.48 (p<.001)   11.56 (p<.001)
EXPECTATION .32      186.64 (p<.001)  69.27 (p<.001)   56.69 (p<.001)   61.76 (p<.001)   32.71 (p<.001)
SUPPORT .30          224.03 (p<.001)  76.08 (p<.001)   57.02 (p<.001)   44.23 (p<.001)   33.42 (p<.001)

TABLE 6  Wald Statistics of Pre-sale vs. Post-sale Coefficients: OVERALL-RATING

POST-SALE            PRE-SALE COEFFICIENTS
COEFFICIENTS         SELECTION -.04   PRICE .03        LOOK .05         SHIP-FEE .02     CHARGES .08
AVAILABILITY .13     53.40 (p<.001)   25.71 (p<.001)   14.49 (p<.001)   69.16 (p<.001)   8.05 (p<.005)
ONTIME .38           327.46 (p<.001)  254.86 (p<.001)  129.96 (p<.001)  483.12 (p<.001)  142.74 (p<.001)
EXPECTATION .30      227.72 (p<.001)  190.78 (p<.001)  119.88 (p<.001)  385.25 (p<.001)  87.98 (p<.001)
SUPPORT .23          168.09 (p<.001)  143.00 (p<.001)  46.19 (p<.001)   335.46 (p<.001)  44.39 (p<.001)

(that the significant coefficients are equal to each other) are rejected for three of the coefficients in the post-sale group at the 1% significance level or better. Table 6 reports the Wald statistics of these tests for the OVERALL-RATING equation. The null hypotheses (that the significant coefficients are equal to each other) can be rejected for all four coefficients in the post-sale group at the 1% significance level or better. These tests confirm the main result stated above, which is consistent with hypotheses 1 and 2.
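The pairwise comparisons in Tables 5 and 6 are standard Wald tests of a linear restriction (equality of one post-sale and one pre-sale coefficient) on the OLS estimates. The following is a minimal numpy-only sketch on synthetic data, not the authors' computation: the two stand-in regressors, their true coefficients, and the noise level are assumptions for illustration, and the statistic follows the textbook form W = (Rb)'[R Cov(b) R']^-1 (Rb).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1570

# Stand-ins for one pre-sale and one post-sale rating, plus an intercept.
ease = rng.uniform(1, 10, n)
ontime = rng.uniform(1, 10, n)
X = np.column_stack([np.ones(n), ease, ontime])
y = 0.5 + 0.05 * ease + 0.30 * ontime + rng.normal(0, 0.5, n)

# OLS: b = (X'X)^-1 X'y, with the usual covariance estimate s^2 (X'X)^-1.
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
resid = y - X @ b
s2 = resid @ resid / (n - X.shape[1])
cov_b = s2 * XtX_inv

# Wald test of H0: b_ontime = b_ease, written as R @ b = 0 with R = [0, -1, 1].
# Under H0 the statistic is chi-squared with one degree of freedom.
R = np.array([[0.0, -1.0, 1.0]])
Rb = R @ b
W = float(Rb @ np.linalg.inv(R @ cov_b @ R.T) @ Rb)
print(round(W, 2))  # large values reject equality of the two coefficients
```

With a single restriction, W is simply the squared t statistic of the coefficient difference, and the p-value comes from the chi-squared(1) tail.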

Pre-sale Satisfaction Versus Post-sale Satisfaction as Groups

In this section, we derive three results about the relative importance of the post-sale satisfaction variables as a group compared to the pre-sale satisfaction variables as a group. First, we show that the set of post-sale satisfaction variables has a significantly stronger impact on repurchase intention and overall rating than does the set of pre-sale satisfaction variables. This result is obtained by computing F statistics that test whether the group of post-sale variables has the same impact on the dependent variables as the set of pre-sale variables (Judge, Griffiths, Carter Hill, Lütkepohl, & Lee, 1985, pp. 182-187, or Greene, 2003, p. 593). The null hypothesis that there is no difference in impact between the groups is rejected for both dependent variables (F = 460.76 (p < .001) in the SHOP-AGAIN equation and F = 1223.96 (p < .001) in the OVERALL-RATING equation). The following two results were obtained by applying the principal component regression method to estimate the impact of the post-sale variables as a group, and the impact of the pre-sale variables as a group, on the dependent variables (Judge, Griffiths, Carter Hill, Lütkepohl, & Lee, 1985; Liu, Kuang, Gong, & Hou, 2003; McCallum, 1970).

Next, we show that the impact of post-sale customer satisfaction on repurchase intention is more than 10 times stronger than the impact of pre-sale satisfaction, and that the impact of post-sale customer satisfaction on overall rating is roughly 15 times stronger than the impact of pre-sale satisfaction.

Using principal component analysis, one can gather correlated independent variables into principal components. For each set of variables (post-sale and pre-sale) we use the first principal component.[6] A meaningful application of this method requires a theoretical explanation for the resulting principal component. It also requires that all variables that define the same component be measured on the same scale. This estimation method is therefore appropriate here because the variables in our sample are classified into two groups according to the timing in which the services are provided (post-sale services vs. pre-sale services), and because BizRate uses the same scale to measure all variables. We proceeded by extracting one principal component (PC_PRE-SALEi and PC_POST-SALEi) for each group of independent variables in each regression (i = SA for the SHOP-AGAIN regression and i = OR for the OVERALL-RATING regression). Then, to estimate the impact of the principal components on the dependent variables, we performed regression analyses in which the dependent variables are standardized (Z-SHOP-AGAIN and Z-OVERALL-RATING).[7]

TABLE 7  Principal Component Regression: Pre-sale vs. Post-sale Effects

                     Z-SHOP-AGAIN                           Z-OVERALL-RATING
PREDICTOR VARIABLES  COEFFICIENTS(a)  T-VALUE   VIF         COEFFICIENTS(a)  T-VALUE   VIF
Constant             .00              .00                   .00              .00
PC-PRE-SALEi         .04              8.06      1.64        .03              7.09      1.64
PC-POST-SALEi        .44              86.72     1.64        .46              121.32    1.64
Fmodel (p level)     6954 (<.001)                           13024 (<.001)
R2 (R2 adjusted)     .899 (.899)                            .943 (.943)

i = SA for the SHOP-AGAIN regression and i = OR for the OVERALL-RATING regression.
(a) All coefficients are significant at p < .001.

Table 7 shows that the estimated coefficients for pre-sale and post-sale satisfaction are all significant. The impact of the post-sale satisfaction components on the dependent variables is more than 10 times stronger than the impact of the pre-sale satisfaction components. Increasing post-sale satisfaction by one scale point leads to an increase of roughly .45 in both repurchase intention and overall rating, whereas increasing pre-sale satisfaction by one scale point leads to an increase of only .04 scale points in repurchase intention and .03 scale points in overall rating. Multicollinearity between the predictor variables does not seem to be a problem, as indicated by the variance inflation factors (VIF): we calculated a VIF of 1.64 for the SHOP-AGAIN equation as well as for the OVERALL-RATING equation. These results strongly support hypotheses 1 and 2.

[6] The first principal component of a set of variables is a weighted average of the variables in which the weights are chosen to make the composite variable reflect the maximum possible proportion of the total variance in the group of variables. Technically speaking, a principal component analysis extracts the eigenvectors of a covariance matrix; the eigenvector with the highest eigenvalue is the first principal component, and it explains variation in the underlying covariates in proportion to its eigenvalue.

[7] This standardization is required by the principal component regression. The eigenvector for the pre-sale satisfaction variables is given by (EASE, SELECTION, CLARITY, PRICE, LOOK, SHIP-FEE, SHIP-VARIETY, CHARGES) = (0.40076, 0.36662, 0.38612, 0.30077, 0.36463, 0.26375, 0.33657, 0.38681), whereas the eigenvector for the group of post-sale satisfaction variables is (AVAILABILITY, ORDER-TRACKING, ONTIME, EXPECTATION, SUPPORT) = (0.43466, 0.45373, 0.46392, 0.43033, 0.45254). These figures indicate that, e.g., EASE, CLARITY, and CHARGES have the highest weights in pre-sale satisfaction, whereas all items in the post-sale group have roughly the same weight.
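The principal-component regression just described can be sketched in a few lines: standardize each group of ratings, score observations on the eigenvector with the largest eigenvalue, and regress the standardized dependent variable on the two component scores. The data below are synthetic (a common factor per group is an assumption made only so that the first component is meaningful); the group sizes mirror Tables 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1570

def first_pc_scores(X):
    """Score observations on the first principal component of X: the
    eigenvector of the correlation matrix with the largest eigenvalue."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    return Z @ eigvecs[:, -1]  # eigh orders eigenvalues ascending

# Synthetic ratings: a shared factor per group makes the items correlated,
# so the first component captures most of each group's variance.
pre = rng.normal(0, 1, (n, 1)) + 0.5 * rng.normal(0, 1, (n, 8))   # 8 pre-sale items
post = rng.normal(0, 1, (n, 1)) + 0.5 * rng.normal(0, 1, (n, 5))  # 5 post-sale items
pc_pre, pc_post = first_pc_scores(pre), first_pc_scores(post)

# Regress the standardized dependent variable on the two component scores;
# the toy weights 0.04 and 0.44 are assumptions echoing the effect sizes above.
y = 0.04 * pc_pre + 0.44 * pc_post + rng.normal(0, 0.3, n)
z_y = (y - y.mean()) / y.std()
X = np.column_stack([np.ones(n), pc_pre, pc_post])
coef, *_ = np.linalg.lstsq(X, z_y, rcond=None)
print(coef.round(2))  # the post-sale component dominates by an order of magnitude
```

Because both the component scores and the standardized dependent variable are centered, the fitted intercept is zero, matching the Constant row of Table 7.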

CONCLUSION E-tailers deliver service in two phases: pre-sale service and post-sale service. To allocate investments in service quality between these two phases, e-tailers must understand how customer satisfaction with each phase affects repurchase intention and overall service rating. We estimated that (a) the impact of post-sale customer satisfaction on repurchase intention is more than 10 times stronger than the impact of pre-sale satisfaction, and (b) the impact of post-sale customer satisfaction on overall rating is roughly 15 times stronger than the impact of pre-sale satisfaction. Because of these “recency effects” (post-sale satisfaction has a stronger impact on the dependent variables than does pre-sale satisfaction), e-tailers are advised to allocate sufficient resources to post-sale service to help improve repurchase likelihood and overall service ratings. The most important post-sale items that influence repurchase intention are (1) product met expectations; (2) customer support; (3) on-time delivery; and (4) availability of the product you wanted. These items also significantly influence overall rating, but in a slightly different order: (1) on-time delivery; (2) product met expectations; (3) customer support; and (4) availability of the product you wanted. Except for EASE in the SHOP-AGAIN equation, all the pre-sale service items have a relatively small impact on both repeat-purchase intentions and overall ratings, as shown in Table 4. These results are based on 3 million customer evaluations of 1580 e-tailers, however, the BizRate data does not represent all buyers because only a fraction of e-customers self-select to complete the evaluation forms. Another limitation is that we focused on e-tailers that sell a large array of different products, and therefore satisfaction with specific product sales is not assessed. Future research should address these limitations, as we discuss further below. 
Investigating how pre-sale and post-sale satisfaction affect overall customer satisfaction and intentions to shop again is of interest to practitioners, who must decide how to allocate their marketing budgets across service activities. It also contributes to the academic literature: previous studies in the behavioral sciences have identified primacy and recency effects in a variety of business contexts, but not in the service delivery process of e-tailers. Although there is a plethora of research on customer satisfaction with services in general, the growing popularity of e-commerce has created a need for studies specifically aimed at the dimensions that affect customer satisfaction and loyalty in e-tailing. The existing studies of these dimensions, discussed in the second section, did not focus on the relative importance of pre-sale versus post-sale satisfaction variables. Our study is consistent with Wolfinbarger and Gilly (2003), who found that fulfillment/reliability is the largest and most consistent predictor of e-tail quality. On the other hand, Szymanski and Hise (2000) found pre-sale satisfaction variables such as convenience, site design, and financial security to be dominant in consumers' assessments of e-satisfaction. Srinivasan, Anderson, and Ponnavolu (2002) also found that pre-sale satisfaction variables have positive effects on word-of-mouth, willingness to pay, and loyalty. However, the respondents in their sample were not surveyed immediately after an actual purchase, as was the case in our sample.

One limitation of our study is that pre-sale variables were measured right after an actual purchase, whereas post-sale variables and the dependent variables were measured after delivery. The recency effects in our study could therefore appear strong because of this proximity of measurement. However, post-sale satisfaction, repeat-purchase intentions, and overall ratings are typically assessed by Internet shoppers after the buying experience ends. Therefore, if measurement-related recency effects matter in our study, they also matter in "real-time" Internet shopping, because the same timing of forming evaluations applies. That is, customers may put more emphasis on post-purchase satisfaction when they form repurchase intentions and overall ratings because post-purchase experiences are remembered better than pre-purchase experiences at the time of evaluation.

Consequently, we believe that the recency effects are estimated correctly given that the dependent variables are measured after delivery. Nevertheless, it would be interesting to isolate these effects in laboratory experiments to gain more insight. Future research should take different approaches to measuring primacy/recency effects in service delivery by e-tailers, and should also investigate whether these effects vary across specific product categories.


REFERENCES

Anderson, N.H. (1981). Foundations of Information Integration Theory. New York: Academic Press.

Anderson, B.H., & Maletta, M.J. (1999). Primacy Effects and the Role of Risk in Auditor Belief-Revision Processes. Auditing: A Journal of Practice & Theory, 18(1), 75–90.

Ariely, D., & Carmon, Z. (2000). Gestalt Characteristics of Experiences: The Defining Features of Summarized Events. Journal of Behavioral Decision Making, 13(2), 191–201.

Asare, S.K. (1992). The Auditor's Going-Concern Decision: Interaction of Task Variables and the Sequential Processing of Evidence. Accounting Review, 67(2), 379–394.

Asch, S. (1946). Forming Impressions of Personality. Journal of Abnormal and Social Psychology, 41(3), 258–290.

Burke, R.R. (2002). Technology and the Customer Interface: What Consumers Want in the Physical and Virtual Store. Journal of the Academy of Marketing Science, 30(Fall), 411–432.

Cao, Y., & Zhao, H. (2004). Evaluations of E-tailers' Delivery Fulfillment: Implications of Firm Characteristics and Buyer Heterogeneity. Journal of Service Research, 6(4), 347–361.

Chase, R.B., & Dasu, S. (2001, June). Want to Perfect Your Company's Service? Use Behavioral Science. Harvard Business Review, 79–84.

Evanschitzky, H., Gopalkrishnan, I.R., Hesse, J., & Ahlert, D. (2004). E-satisfaction: A Re-examination. Journal of Retailing, 80(3), 239–247.

Fornell, C. (1992). A National Customer Satisfaction Barometer: The Swedish Experience. Journal of Marketing, 56(1), 6–22.

Greene, W.H. (2003). Econometric Analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Haugtvedt, C.P., & Wegener, D.T. (1994). Message Order Effects in Persuasion: An Attitude Strength Perspective. Journal of Consumer Research, 21(1), 205–220.

Highhouse, S., & Gallo, A. (1997). Order Effects in Personnel Decision Making. Human Performance, 10(1), 31–47.

Johar, G.V., Jedidi, K., & Jacoby, J. (1997). A Varying-Parameter Averaging Model of On-Line Brand Evaluations. Journal of Consumer Research, 24(2), 232–247.

Johnson, E.N. (1995). Effects of Information Order, Group Assistance, and Experience on Auditors' Sequential Belief Revision. Journal of Economic Psychology, 16(1), 137–160.

Judge, G.G., Griffiths, W.E., Carter Hill, R., Lütkepohl, H., & Lee, T.-C. (1985). The Theory and Practice of Econometrics (2nd ed.). New York: Wiley.


Kahneman, D., Fredrickson, B.L., Schreiber, C.A., & Redelmeier, D.A. (1993). When More Pain Is Preferred to Less: Adding a Better End. Psychological Science, 4(6), 401–405.

Liu, R.X., Kuang, J., Gong, Q., & Hou, X.L. (2003). Principal Component Regression Analysis with SPSS. Computer Methods and Programs in Biomedicine, 71, 141–147.

Loveman, G.W. (1998). Employee Satisfaction, Customer Loyalty, and Financial Performance. Journal of Service Research, 1(1), 18–31.

Lund, F. (1925). The Psychology of Belief IV: The Law of Primacy in Persuasion. Journal of Abnormal and Social Psychology, 20, 183–191.

McCallum, B.T. (1970). Artificial Orthogonalization in Regression Analysis. Review of Economics and Statistics, 52, 110–113.

Neter, J., Wasserman, W., & Kutner, M.H. (1985). Applied Linear Statistical Models. Homewood, IL: Richard D. Irwin.

Oliver, R.L. (1977). Effect of Expectation and Disconfirmation on Postexposure Product Evaluations. Journal of Applied Psychology, 62(4), 480–487.

Oliver, R.L. (1993). Cognitive, Affective, and Attribute Bases of the Satisfaction Response. Journal of Consumer Research, 20(3), 418–431.

Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1988). SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality. Journal of Retailing, 64(1), 12–41.

Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1994). Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research. Journal of Marketing, 58(1), 111–125.

Redelmeier, D.A., & Kahneman, D. (1996). Patients' Memories of Painful Medical Treatments: Real-Time and Retrospective Evaluations of Two Minimally Invasive Procedures. Pain, 66(1), 3–8.

Reichheld, F.F. (2003, December). The One Number You Need to Grow. Harvard Business Review, 46–54.

Srinivasan, S.S., Anderson, R., & Ponnavolu, K. (2002). Customer Loyalty in E-Commerce: An Exploration of Its Antecedents and Consequences. Journal of Retailing, 78(1), 41–50.

Szymanski, D.M., & Henard, D.H. (2001). Customer Satisfaction: A Meta-Analysis of the Empirical Evidence. Journal of the Academy of Marketing Science, 29(1), 16–35.

Szymanski, D.M., & Hise, R.T. (2000). E-Satisfaction: An Initial Examination. Journal of Retailing, 76(3), 309–322.

Winer, R.S. (2001). A Framework for Customer Relationship Management. California Management Review, 43, 89–105.

Wolfinbarger, M., & Gilly, M.C. (2003). eTailQ: Dimensionalizing, Measuring and Predicting E-tail Quality. Journal of Retailing, 79(3), 183–198.

Yi, Y. (1990). Direct and Indirect Approaches to Advertising Persuasion: Which Is More Effective? Journal of Business Research, 20(4), 279–291.

Zeithaml, V.A., Parasuraman, A., & Malhotra, A. (2002). Service Quality Delivery through Web Sites: A Critical Review of Extant Knowledge. Journal of the Academy of Marketing Science, 30(Fall), 362–375.

APPENDIX A

EXAMPLE OF A BIZRATE WEB SITE SHOWING THE 15 QUALITY RATINGS

bookstreet is Not Customer Certified. Over 25,000 customers have rated this store since 2000.

Detailed Store Ratings

Overall satisfaction
  Would shop here again (likelihood to buy again from this store): 6.6 out of 10
  Overall rating (overall experience with this purchase): 6.0 out of 10

Post-Fulfillment Satisfaction
  Availability of product you wanted (product was in stock at time of expected delivery): 8.7 out of 10
  Order tracking (ability to track orders until delivered): 4.7 out of 10
  On-time delivery (product arrived when expected): 4.8 out of 10
  Product met expectations (correct product was delivered and it worked as described/depicted): 7.5 out of 10
  Customer support (availability/ease of contacting, courtesy and knowledge of staff, resolution of issue): 3.5 out of 10

Pre-Ordering Satisfaction
  Ease of finding what you are looking for (how easily were you able to find the product you were looking for): 8.0 out of 10
  Selection of products (types of products available): 7.6 out of 10
  Clarity of product information (how clear and understandable was the product information): 7.7 out of 10
  Prices relative to other online merchants (prices relative to other Web sites): 8.8 out of 10
  Overall look and design of site: 8.0 out of 10
  Shipping charges: 7.1 out of 10
  Variety of shipping options (desired shipping options were available): 8.0 out of 10
  Charges stated clearly before order submission (total purchase amount, including shipping/handling charges, displayed before order submission): 7.1 out of 10


APPENDIX B

BIZRATE EVALUATION FORM

Please keep the following in mind when reviewing this store:

1. Provide specific, relevant information about this online purchase experience.
2. Stick to the facts and try to be as accurate as possible.
3. Only write a review after you have/were scheduled to receive the product.

Each item below is rated on a scale from 1 to 10, with an additional "n/a" option.

Online ordering process:
- Ease of finding what you are looking for
- Selection of products
- Clarity of product information
- Prices relative to other online merchants
- Overall look and design of site
- Shipping charges
- Variety of shipping options
- Charges stated clearly before order submission

Delivery/Fulfillment of your online order:
- Availability of product you wanted
- Order tracking
- On-time delivery
- Product met expectations
- Customer support

Overall satisfaction:
- Would shop here again
- Overall rating

Symbol Key: Outstanding, Good, Satisfactory, Poor

APPENDIX C

DESCRIPTIVE STATISTICS

Variable          Mean   SD     Min    Max
SHOP-AGAIN        8.52   0.73   1.30   9.70
OVERALL-RATING    8.42   0.73   1.60   9.60
EASE              8.54   0.33   5.80   9.50
SELECTION         8.48   0.36   6.00   9.40
CLARITY           8.40   0.39   4.00   9.40
PRICE             8.45   0.43   5.70   9.40
LOOK              8.43   0.33   5.70   9.20
SHIP-FEE          7.56   0.97   3.00   9.80
SHIP-OPTIONS      8.32   0.52   4.10   9.40
CHARGE            8.94   0.36   2.50   9.80
AVAILABILITY      8.72   0.59   4.50   9.80
TRACKING          8.42   0.77   3.10   9.70
ONTIME            8.62   0.74   3.10   9.80
EXPECTATION       8.75   0.56   3.10   9.80
SUPPORT           8.19   0.96   1.00   9.90

Number of observed cases n = 1570 for all variables.

APPENDIX D

CORRELATION MATRIX FOR THE PRE-SALE VARIABLES (c)

              EASE  SELECTION  CLARITY  PRICE  LOOK  SHIP-FEE  SHIP-OPTIONS  CHARGES
EASE          1
SELECTION     .67   1
CLARITY       .72   .70        1
PRICE         .54   .44        .35      1
LOOK          .75   .58        .75      .26    1
SHIP-FEE      .36   .31        .26      .48    .28   1
SHIP-OPTIONS  .51   .46        .47      .51    .45   .53       1
CHARGES       .63   .59        .67      .52    .60   .47       .58           1

CORRELATION MATRIX FOR THE POST-SALE VARIABLES (c)

              AVAILABILITY  TRACKING  ON-TIME  EXPECTATION  SUPPORT
AVAILABILITY  1
TRACKING      .74           1
ON-TIME       .81           .90       1
EXPECTATION   .78           .73       .75      1
SUPPORT       .73           .85       .86      .77           1

(c) All the correlations for the pre-sale and post-sale variables are significant at p < .01.
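A lower-triangular correlation matrix and its significance check of this kind can be produced as sketched below. This is an illustration under simulated data, not the BizRate sample, and the cutoff |r| > 2.576/sqrt(n) is a large-sample normal approximation to the exact p < .01 test:

```python
# Sketch: build a lower-triangular correlation matrix for the five
# post-sale items and flag whether every correlation clears p < .01.
# The data are simulated stand-ins with a shared factor, so all items
# are positively correlated, as in Appendix D.
import numpy as np

rng = np.random.default_rng(1)
n = 1570                                         # number of observed cases, as in the paper
names = ["AVAILABILITY", "TRACKING", "ONTIME", "EXPECTATION", "SUPPORT"]
common = rng.normal(size=n)                      # shared factor drives positive correlations
data = np.column_stack([common + rng.normal(scale=0.6, size=n) for _ in names])

r = np.corrcoef(data, rowvar=False)              # full 5x5 correlation matrix
cutoff = 2.576 / np.sqrt(n)                      # approximate critical |r| at p = .01

for i, name in enumerate(names):
    cells = "  ".join(f"{r[i, j]:.2f}" for j in range(i + 1))
    print(f"{name:<12} {cells}")
print("all correlations significant at p < .01:",
      bool((np.abs(r[np.triu_indices(5, 1)]) > cutoff).all()))
```

With n = 1570, the critical |r| is roughly .07, which is why even the weakest correlations in these tables are comfortably significant.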
