Journal of Retailing and Consumer Services 31 (2016) 228–238
Identifying opportunities for improvement in online shopping sites

Gerson Tontini
Regional University of Blumenau – FURB, Antônio da Veiga Street, No 140, Room D 102, Victor Konder District, 89012-900 Blumenau, Santa Catarina, Brazil
Article history: Received 13 May 2015; Received in revised form 17 February 2016; Accepted 24 February 2016

Abstract
The aim of this study is to show how different methods may provide online shopping managers with information regarding which attributes affect customer satisfaction, and how to identify what to improve or offer in the market. For this purpose, 409 Brazilian users of online shopping answered questionnaires evaluating 26 attributes. These attributes are grouped into five dimensions: Accessibility, Fault recovery, Security, Flexibility, and Interaction/feedback. The present study evaluates different actions suggested by Importance Performance Analysis (Martilla and James, 1977; Slack, 1994) and Improvement Gap Analysis (Tontini and Picolo, 2010), exploring the limitations and strengths of each method. The results show that Improvement Gap Analysis overcomes the limitations of Importance Performance Analysis related to the nonlinear relationship between attribute performance and customer satisfaction.
© 2016 Elsevier Ltd. All rights reserved.
Keywords: Online shopping sites; Customer satisfaction; Identifying opportunities for improvement; Improvement gap analysis; Importance performance analysis
1. Introduction

In 2015, the number of internet users reached 3.3 billion, around 45% of the world's population (http://www.internetworldstats.com/stats.html). In this environment, the main determinant of success or failure in e-commerce is not just the price, but also the process for delivering products and the quality of the website. If these factors are good, customers accept paying more for the product or service (Rababah et al., 2011). Therefore, the "quality" of both the website and the final service plays a crucial role in attracting and retaining customers and, consequently, is essential for the success of the company on the internet (Bai et al., 2008). Rababah et al. (2011) state that, in the same way as face-to-face service, an online store must continuously look for fulfillment of customers' needs, in order to ensure return visits and win their loyalty.

Several studies have attempted to understand how consumers evaluate the quality of retail websites and their services, and how this affects customer satisfaction and loyalty. In relation to assessment instruments, Stiakakis and Georgiadis (2009), among others, cite the work of 21 other authors from 2000 to 2008, addressing various dimensions of the quality of online services. Regarding the identification of how different attributes and dimensions affect the satisfaction and loyalty of users, among several other publications we can mention Ribbink et al. (2004), with 610 citations, and Cristobal et al. (2007), with 322 citations (Google Scholar, 2016/02/13). Recently, Valvi and Fragkos (2012) synthesized the results of 62 empirical studies measuring e-loyalty.

E-mail address: [email protected]
http://dx.doi.org/10.1016/j.jretconser.2016.02.012
0969-6989/© 2016 Elsevier Ltd. All rights reserved.
Thus, it is clearly of great interest to understand the dimensions of quality of online services and how to evaluate them. A few studies have attempted to evaluate methodologies focusing on how to identify what should be improved or offered on websites and in online services. Most of them use Importance Performance Analysis (IPA) (O'Neill et al., 2001; Oh and Zhang, 2010; Dong, 2012; Öz, 2012; Pokryshevskaya and Antipov, 2013). Originally proposed by Martilla and James (1977), IPA is one of the most-used methods for identifying what should be improved in products or services (Azzopardi and Nash, 2013). Regarding the use of IPA in online services, O'Neill et al. (2001), based on an adaptation of the SERVQUAL scale, apply IPA to identify what to improve in an online library service. Oh and Zhang (2010) use IPA to identify the strengths and weaknesses of foreign sites, studying which factors contribute to the Chinese preference for domestic internet services. Seng Wong et al. (2011) use IPA to evaluate e-government services. Dong (2012) proposes an evaluation model of e-commerce customer satisfaction, covering the dimensions of transaction security, product information, website design, service integrity, and product features. He proposes to define what the company should improve by using IPA, but does not show its application. Öz (2012), exploring 93 attributes in 6 groups (Service information, Purchase process, Contact and customer support, Offered services, Site navigation and usability, Company information), applies IPA to investigate what to improve on airline companies' websites. Pokryshevskaya and Antipov (2013) apply IPA to evaluate 13 attributes of two internet stores, showing how this method may help companies to identify what to improve. Although widely used, the traditional IPA approach (Martilla and James, 1977) has limitations when the company surveys only its own customers (Tontini and Silveira, 2007). To overcome
these limitations, Slack (1994) presents an alternative for assessing the importance and performance of products or services, using diagonal IPA. Even though diagonal IPA is an improvement over traditional IPA, only Ahrholdt (2011), in a complementary analysis, uses it to evaluate what to improve on an e-tail website. A limitation of that work is that it uses statistically inferred importance, leading 12 attributes to be identified as having nonsignificant importance. Neither diagonal IPA (Slack, 1994) nor traditional IPA (Martilla and James, 1977) takes into account the possible nonlinearity between the performance of attributes and customer satisfaction. According to Kano et al. (1984), attributes can be classified, according to the nonlinear relationship between their performance and customer satisfaction, as one-dimensional, mandatory, neutral, or attractive. Mandatory attributes (M) fulfill basic functions of the service. Customers see these attributes as prerequisites and are highly dissatisfied if these attributes are not offered or if their performance is inadequate. On the other hand, these attributes do not bring satisfaction if they are present or have sufficiently good performance. For one-dimensional attributes (O), the higher the attribute's performance, the greater the customer satisfaction, and vice versa. Attractive attributes (A) bring superior satisfaction when delivered with high performance. However, they do not bring dissatisfaction if their performance is low. Two other types of attributes can be identified in the Kano model: neutral (N) and reverse (R). Neutral attributes bring about neither satisfaction nor dissatisfaction, while reverse attributes bring more satisfaction by their absence than by their presence. There are few studies dealing with the issue of nonlinearity in online services. Zhao and Dholakia (2009) and Chen and Wu (2009) use the traditional method of the Kano model for classification of attributes.
Ramanathan (2010) uses a methodology proposed by Hartline et al. (2003), which is based on the same principles as Penalty and Reward Analysis (Picolo and Tontini, 2008). These studies only identify the distribution of online service attributes among the Kano model classifications (attractive, mandatory, one-dimensional, and neutral). They do not exploit this nonlinearity to identify what should be improved or offered in e-commerce. As demonstrated by Tontini and Silveira (2007), the traditional method of Importance Performance Analysis can lead to erroneous decisions when assessing whether mandatory or attractive attributes should be improved or offered. Aiming to overcome these problems, Tontini and Picolo (2010) propose Improvement Gap Analysis (IGA), a fusion of IPA with the Kano model. To date, this method has not been applied to online shopping sites. Thus, we arrive at the following research question: What are the differences in the results of IPA and IGA when assessing what to improve or offer in online shopping websites? To answer this question, the present study compares how these methods identify what to improve or offer in online stores. In order to do so, first we present the general dimensions of these services. Then, we discuss traditional IPA, diagonal IPA, and IGA, exploring their possible limitations and stating research propositions. These methods were applied to a sample of 409 e-commerce customers, investigating what to improve among 26 attributes. The results show that traditional IPA tends to dismiss attractive attributes because it does not consider the nonlinearity between attributes' performance and customer satisfaction. Furthermore, the results show that, although diagonal IPA overcomes some of the problems of traditional IPA, it does not distinguish attractive attributes from neutral ones.
Finally, because it is a dynamic method, unlike traditional IPA and diagonal IPA, IGA is more selective about which attributes should be improved or offered.
2. Literature review

2.1. Dimensions of online services

An online store is a service. However, it is a distinct service, where customers browse and decide alone, with several aspects having a different impact on customer satisfaction in comparison to in-person services. Zeithaml et al. (2000) developed one of the first models for evaluating the quality of online retail services: e-SERVQUAL. This model identified 11 dimensions: a) Access (to the website or the company when needed); b) Guarantee/trust (client feels confident when accessing); c) Ease of navigation; d) Efficiency (site is simple to use, minimal data required to be input by the customer); e) Flexibility (in conducting an electronic transaction); f) Customization/personalization (based on customer preferences and purchase histories); g) Price knowledge (on transport, total, and comparative prices); h) Security/privacy (site security, personal information is protected); i) Aesthetics of the site (appearance attributes); j) Reliability (correct technical functioning of the site, fulfillment of promises made to the customer); k) Answer (quick response to customer needs). Another model developed to assess the quality of online services is given by the E-S-QUAL and E-RecS-QUAL scales suggested by Parasuraman et al. (2005). In more recent studies, these authors reduced the number of dimensions to seven: a) Efficiency (can access and use the site easily and quickly); b) Fulfillment (fulfillment of promises about order delivery and item availability); c) System availability (correct technical functioning of the site); d) Privacy (site is safe, customer information is protected); e) Reply (effective treatment of problems); f) Compensation (site compensates customers due to problems); g) Contact (service representatives available via phone or online).
According to the authors, the first four dimensions constitute the "core" quality (E-S-QUAL scale), while the latter three constitute the "recovery" quality (E-RecS-QUAL scale). Besides Zeithaml et al. (2000) and Parasuraman et al. (2005), several other authors have developed scales with specific and differing foci for assessing the quality of online services. We can mention eTransQual (Bauer et al., 2006), with five dimensions: a) Functionality/design; b) Pleasure; c) Process; d) Reliability; e) Responsiveness; and PESQ (Cristobal et al., 2007), with four dimensions: a) Web design; b) Customer service; c) Guarantee; d) Order management. Regarding e-commerce, Collier and Bienstock (2006) say that the conceptualization of its quality consists of three dimensions: a) Quality of the process; b) Quality of the outcome; c) Quality of the recovery. Chou and Cheng (2011), cited by Goi (2012), approach this via three dimensions: a) Quality of the online system (usability, navigability, accessibility, privacy); b) Quality of information (relevance, wealth of understanding); c) Quality of the process (responsiveness, reliability, security, and empathy). We could say that one reason previous studies use different dimensions and scales is that distinct services (online shopping, banking, etc.) have different dimensions. Table 1 shows 14 dimensions in relation to online services in general, and online shopping services in particular. Some dimensions may correlate highly with each other; for example, a good design can lead to better navigation. Table 2 shows the results of 24 studies, considering evolution over time and the relation of the dimensions researched with customers' general evaluation of the service. Although these studies do not represent all dimensions of online services, we can see that the most-used methodology of analysis is structural equations (54%), and the most frequent output of the model (dependent variable) is customer satisfaction (29%).
Regarding the researched dimensions, presented in Table 2, the
Table 1
Quality dimensions of online services.

Design: Design and visual appeal of the site interface.
Navigability: Structure and ease of navigating the site.
Usability, information: Information on products and services sold, including comparative prices.
Reliability: Supplying as promised; correct technical functioning of the site; and quality of products.
Responsiveness: Speed and accuracy of responses.
Security, privacy: Trust and confidence conveyed by the site, including brand recognition.
Personalization, flexibility: Offer of individualized care, with empathetic staff, accepting customers' special requirements, and site customization.
Value: Costs involved in the site, including product pricing and shipping costs.
Communication and interaction: Possibility of communication between customers; performance in eventual communication; customer support; delivery tracking.
Access and availability: Website and products available all the time.
Assortment/variety: Variety of products available.
Prestige/image: Image of the online store among customers.
Fault recovery: Online service actions to attend to and correct service faults.
Product quality: Quality of products sold.
most common number of studied dimensions is 4 (mode), with a median of 5. "Reliability" is the most studied (83%), followed by "navigability/convenience" (71%), "service security" (67%), "usability/information" (58%), "responsiveness/speed" (54%), "website design" (50%), "communication/interaction" (46%), "personalization/flexibility" (33%), "fault recovery" (17%), "value" (17%), "assortment/variety" (13%), "availability" (8%), "product quality" (4%), and "prestige/image" (4%). It is interesting to notice that very few researchers have studied service faults and fault recovery. This is a topic that should be further studied. Most of the studies in Table 2 evaluate the dimensions' relevance from the customer point of view, using statistically inferred importance, identified by structural equations or coefficients of linear regression equations. On the other hand, Kurt and Atrek (2012) and Cebi (2013) evaluated the importance of the dimensions using customers' reasoning, also called stated importance. As we will see in Section 2.2, these different methods for evaluating attributes' importance lead to distinct results, because each method in fact measures different aspects of the product or service from the point of view of customers.

2.2. Importance performance analysis

Different methods are used to identify what to improve or offer in services or products. IPA, originally proposed by Martilla and James (1977), is one of the most-used methods for identifying what to improve in products and services. Typically, to conduct an importance–performance analysis, data regarding the importance of, and customer satisfaction with, various attributes are used to build a matrix, with importance on the Y-axis and performance (current satisfaction) on the X-axis. In traditional IPA (Martilla and James, 1977) the matrix is divided into four quadrants (Fig. 1a). An attribute with great importance and high performance is a possible competitive advantage (strength).
An attribute that has a considerable importance but low performance should receive immediate attention (weakness). Attributes with low importance and low performance supposedly do not require additional effort to improve them (minor weakness). For attributes with high performance but little importance, the company may be wasting resources that could be used elsewhere (minor strength). A possible disadvantage of this approach is that it is based on quadrant analysis, where “a small change in the position of an attribute can lead to a dramatic change in inferred priority” (Eskildsen and Kristensen, 2006, p. 42). The assessment of attributes’ performance may be carried out by evaluating the performance of the company compared to its competitors, or just asking its own customers to evaluate the current performance of the company. According to Tontini and
Picolo (2010), although this last approach does not allow for competitive analysis, it is helpful for companies that do not have information about competitors’ customers’ satisfaction. The identification of attributes’ importance for customers is a key for finding improvement opportunities. In IPA, two different methods are used to identify the relevance of attributes: “stated importance” and “statistically inferred importance.” Stated importance asks customers to indicate attributes’ importance, generally on a Likert scale ranging from “not important at all” to “very important.” For statistically inferred importance, customers are asked about both their satisfaction with the current performance of each attribute and their general satisfaction with the product under study. The importance is evaluated by the impact of satisfaction with an attribute on overall customer satisfaction. “Stated importance” and “statistically inferred importance” have low convergent validity (Garver, 2003; Smith and Deppa, 2009). This lack of convergence occurs because the two methods measure different aspects of an attribute's importance (Smith and Deppa, 2009; Mikulic and Prebezac, 2008). According to Mikulic and Prebezac (2008), “stated importance” is evaluated “based on memory” or as an “expected value.” Using this method, the customer rationalizes about the expected benefits if an attribute has good performance, the possible sacrifices due to poor performance, and socially acceptable answers. On the other hand, statistically inferred importance is referred to as “experiential value,” which depends on past and present customer experiences with the attribute. Most studies using IPA methodology tend to use stated importance as a direct way of measuring the importance of selected attributes (Azzopardi and Nash, 2013; Abalo et al., 2007; Chrzan and Golovashkina, 2006; Griffin and Hauser, 1993). 
Bacon (2003) supports the idea that direct measures of importance reflect an attribute's importance better than a statistically inferred method. Pokryshevskaya and Antipov (2012) show that different approaches for identification of attributes' inferred importance (regression coefficients, Pearson's correlation, and Shapley value decomposition) lead to distinct results. According to Bottomley et al. (2000), direct measurement results are more robust concerning the estimated weights and more stable in a test–retest situation. Although stated importance is considered a more appropriate method for IPA (Bacon, 2003), some studies show that it has a high correlation with customer satisfaction with each attribute (Mittal et al., 1999; Bacon, 2003; Matzler et al., 2004). Thus, the first proposition is:

P1: Due to a linear correlation between stated importance and current performance, traditional IPA leads to a concentration of attributes classified as minor weaknesses or as strengths.
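The quadrant logic of traditional IPA described above can be sketched in a few lines. This is an illustrative sketch, not code from the study: the dict-based input format is an assumption, and the crosshairs are placed at the grand means of importance and performance (placing them at the scale midpoints is an equally common choice).

```python
def ipa_quadrants(attributes):
    """Traditional IPA (Martilla and James, 1977): split the importance x
    performance matrix at the grand means and assign each attribute to
    one of the four quadrants."""
    n = len(attributes)
    imp_mean = sum(v["importance"] for v in attributes.values()) / n
    perf_mean = sum(v["performance"] for v in attributes.values()) / n
    quadrants = {}
    for name, v in attributes.items():
        high_imp = v["importance"] >= imp_mean
        high_perf = v["performance"] >= perf_mean
        if high_imp and high_perf:
            quadrants[name] = "strength"        # possible competitive advantage
        elif high_imp:
            quadrants[name] = "weakness"        # needs immediate attention
        elif high_perf:
            quadrants[name] = "minor strength"  # possibly wasted resources
        else:
            quadrants[name] = "minor weakness"  # low priority
    return quadrants
```

Feeding the function mean importance and satisfaction scores per attribute (hypothetical names and values) returns the quadrant label each attribute falls into.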
Table 2
Studies on online service quality and customer satisfaction.
[Flattened in extraction: the table cross-tabulates 24 studies published between 2000 and 2013 (including Zeithaml et al., 2000; Wolfinbarger and Gilly, 2003; Parasuraman et al., 2005; Ahrholdt, 2011; Kurt and Atrek, 2012; Cebi, 2013; Dickinger and Stangl, 2013) against the 14 quality dimensions of Table 1, reporting the significance of each researched dimension, the number of dimensions, R2, model output, and methodology; individual cell entries are not recoverable.]
Model output: 1 – Quality; 2 – Satisfaction; 3 – Intention to Buy; 4 – Stated Importance. Methodology: 1 – Structural Equations; 2 – Regression Analysis; 3 – Analytic Hierarchy Process (AHP); 4 – DEMATEL; 5 – Kano model; C – Concept development. Statistics: x – not available; s – significant; n.s. – nonsignificant.
Fig. 1. (a) Traditional importance performance analysis: an importance (Y) by performance (X) matrix divided into four quadrants (major strength, minor strength, major weakness, minor weakness). (b) Diagonal importance performance analysis: zones "Urgent Action," "Improve," "Appropriate," and "Excess" along a 45° diagonal.
Slack (1994) overcomes the limitations of traditional IPA by presenting an alternative that considers the relationship between importance and performance. Diagonal IPA analyzes what to improve by using nonsymmetrical zones (Fig. 1b), with a 45° oblique line in standardized importance–performance scales. This approach allows a more continuous transition in the inferred priorities (Eskildsen and Kristensen, 2006). The reasoning behind this is that customers could accept lower performance on less important attributes, and require higher performance on more important attributes. Thus, the second proposition is:

P2: Diagonal IPA overcomes the limitation of traditional IPA, the concentration of attributes classified as minor weaknesses or strengths, allowing a more continuous transition in the priorities for improvement.

Although diagonal IPA (Slack, 1994) overcomes the problems of original IPA (Martilla and James, 1977), neither method takes into account possible nonlinearity between the performance of attributes and customer satisfaction (Kano et al., 1984). This may lead to erroneous decisions when evaluating whether mandatory or attractive attributes should be improved or offered (Tontini and Picolo, 2010; Deng et al., 2008). The reason is that customers rationalize, giving less importance to attractive attributes than to mandatory or one-dimensional ones, and vice versa, depending on the performance of the attributes. Attributes that can be classified as attractive have a greater impact on creating satisfaction in cases of high performance than they have on creating dissatisfaction in cases of low-level performance. Conversely, mandatory attributes have a greater impact on creating dissatisfaction in cases of low-level performance than they have on creating satisfaction in cases of high-level performance (Mikulic and Prebezac, 2008). Thus the third proposition is:
P3: Diagonal IPA tends to suggest that an attractive attribute be "kept as it is," while traditional IPA tends to classify it as a "minor weakness."

2.3. Improvement Gap Analysis

IGA (Tontini and Picolo, 2010) fuses IPA with the Kano model to identify opportunities for improvement. This method is based on two axes. The X-axis measures the gap between attributes' desired performance and current performance. The Y-axis contains the expected customer dissatisfaction for attributes with low performance. Thus, there are three questions for each attribute: a) satisfaction with high performance or the existence of the attribute (functional); b) satisfaction with low performance or its absence (dysfunctional); c) current satisfaction with the attribute, according to the example in Fig. 2. The data are coded on a scale from 1 to 9. The gap is given by: improvement gap = satisfaction with the functional question − current satisfaction. The data are standardized and plotted on a matrix with four quadrants (Fig. 3). An attribute is classified as "attractive" when the improvement gap is higher than 0 and the possible dissatisfaction with the absence of the attribute is lower than 0. It is rated as "critical" for improvement if both the improvement gap and the expected dissatisfaction are larger than 0. The current performance should be maintained when the improvement gap is less than 0 and dissatisfaction with the absence, or poor performance, of the attribute is greater than 0. An attribute should only be evaluated if necessary when both the improvement gap and the expected dissatisfaction are smaller than 0. The reasoning behind IGA is that customers tend to imagine the
Fig. 2. Examples of questions for improvement gap analysis (Source: adapted from Tontini and Picolo (2010)).
Table 3 Characteristics of respondents.
Fig. 3. Improvement gap analysis.
ideal or desired situation and mark their expected satisfaction when responding to the functional question. Thus, the improvement gap indicates the possible gain in satisfaction if the attribute is enhanced to the desired situation. Similarly, when answering the question about dissatisfaction with the dysfunctional situation, customers tend to imagine the worst scenario and respond accordingly. Tontini and Picolo (2010) use IGA to evaluate what to improve or offer in supermarkets. They found that IGA was successful in distinguishing between attractive and neutral attributes, suggesting improving the attractive attributes and keeping the neutral ones "as is." These authors also claim that "IGA can assess the impact of the improvement of an attribute that already performs well" (Tontini and Picolo, 2010, p. 581). We can say that IGA is a dynamic method, diagnosing the possible outcome if some improvement is made. IPA methods, in contrast, are static, photographing the current situation. Thus, the fourth proposition is:

P4: IGA identifies attractive attributes, i.e., those that can bring satisfaction to customers if they are offered, differentiating them from those that would be neutral if offered.
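The IGA classification described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the dict-based input of mean 1–9 scores is hypothetical, and the recoding of the dysfunctional answer as 10 − score (so that higher values mean stronger expected dissatisfaction) is an assumption about how the satisfaction scale is reversed.

```python
from statistics import mean, stdev

def classify_iga(attributes):
    """Classify attributes per Improvement Gap Analysis (Tontini and
    Picolo, 2010). `attributes` maps attribute name -> dict with mean 1-9
    scores for the functional, dysfunctional, and current questions."""
    # Improvement gap: satisfaction expected in the ideal (functional)
    # situation minus current satisfaction with the attribute.
    gaps = {a: v["functional"] - v["current"] for a, v in attributes.items()}
    # Expected dissatisfaction if the attribute is absent or performs
    # poorly; recoded so higher means more dissatisfied (assumption).
    diss = {a: 10 - v["dysfunctional"] for a, v in attributes.items()}

    def standardize(d):
        m, s = mean(d.values()), stdev(d.values())
        return {k: (v - m) / s for k, v in d.items()}

    z_gap, z_diss = standardize(gaps), standardize(diss)
    result = {}
    for a in attributes:
        if z_gap[a] > 0:
            result[a] = "critical" if z_diss[a] > 0 else "attractive"
        else:
            result[a] = "keep as is" if z_diss[a] > 0 else "evaluate if necessary"
    return result
```

With standardized scores, an attribute with a large positive gap but low expected dissatisfaction lands in the "attractive" quadrant, matching the reasoning above.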
3. Design methodology and profile of respondents

This paper presents confirmatory research (Hair et al., 2006) using a quantitative approach and an instrument for data collection with five parts. In the first part, we asked respondents to indicate their expected satisfaction or dissatisfaction with two questions for each attribute: one regarding the existence/adequacy of the attribute, and another regarding its absence/inadequacy. The questions of sufficiency and insufficiency were placed in random order for all attributes. The second stage asked respondents to state their current satisfaction with each attribute, for the online store with which they had the most experience. In the third section of the questionnaire, the respondents indicated the degree of importance of each attribute, on a Likert scale from "not important" to "very important," coded 1–5. The fourth stage had personal questions about age, gender, frequency of use, and overall satisfaction with accessed services. The research was sent to a list of email contacts in Brazil, from September to November 2012. We obtained 429 completed questionnaires, using nonprobability sampling.
a – Shops (frequency, %): Netshoes 64 (16%); Free Market 37 (9%); American 29 (7%); Submarine 28 (7%); Easy shopping 8 (2%); Other 148 (36%); Not scored 95 (23%).
b – Age: under 25: 232 (57%); 26 to 30: 83 (20%); 31 to 35: 40 (10%); 36 to 40: 18 (4%); over 40: 27 (7%); Not answered: 7 (2%).
c – Gender: Female 212 (54.1%); Male 173 (44.0%); Undeclared 7 (1.9%); Not answered 17 (4.4%).
d – Buying: Discount offers 199 (48%); Regular price 157 (38%); Auction 9 (2%); Not answered 53 (13%).
e – Webstore experience: More than one year 265 (65%); One year 50 (12%); Six months 29 (7%); One month 29 (7%); Not answered 36 (9%).
f – Last purchase on the site: Never/Not answered 40 (10%); One year or more 55 (13%); Last six months 138 (34%); About one month 95 (23%); Last 15 days 81 (20%).
Twenty questionnaires were discarded because of inconsistent or invalid responses, giving a total of 409 questionnaires used for analysis. To analyze attributes of websites that should be improved, we used IPA (Martilla and James, 1977; Slack, 1994) and IGA (Tontini and Picolo, 2010), using standardized data. A total of 26 attributes were investigated in the present research, as listed in Table 4 and Appendix A. These attributes were established based on an interview with five users of online shopping, discussing what is required and what could be a good idea to have on the site (e.g., At07 – Price comparison between suppliers, At26 – Customizing the site design, At06 – Product comparison system, At25 – Possibility of collecting the product at a nearby store). Table 3 shows the summary of the profile of respondents. Most of them are less than 45 years of age (87%). Okada et al. (2014) and Nielsen IBOPE (2014) identified that 120 million people had access to the internet at that time, of which 60% were aged between 18 and 44 years, and 80% already had experience with online shopping. The sample for the present research differs from those of other studies developed in Brazil, so the results (what should be improved or offered) are applicable only to the present case study. Most respondents in the present research have more than one year's experience (65%) in online shopping and bought on the evaluated site in the last six months (67%). A significant portion (48%) bought based on offers. There is a wide dispersion of respondents among the most-used shopping sites (Table 3a).

3.1. Researched dimensions

To check the reliability of the survey instrument, and to identify which dimensions underlie the attributes, we performed a factor analysis with Varimax rotation and measured consistency by Cronbach's alpha (Table 4).
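Cronbach's alpha, used here to check the internal consistency of each dimension, can be computed directly from the item scores. A minimal sketch (not the study's code; the input format and toy data are hypothetical):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items.

    `items` is a list of k lists, each holding one score per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    """
    k = len(items)
    sum_item_var = sum(variance(item) for item in items)
    # Total score per respondent across all items.
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum_item_var / variance(totals))
```

Perfectly correlated items yield an alpha of 1.0; the weaker the inter-item correlation, the lower the alpha, with values around 0.7 or above conventionally taken as acceptable consistency.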
The attributes "At20 – Brand recognition" and "At15 – Comparative information regarding quality of products" were not included because they did not load strongly on any of the dimensions. Thus, five dimensions are analyzed: a) Accessibility/speed. This dimension may also be called "efficiency" (Parasuraman et al., 2005). In addition, it includes features of navigability (Schaupp and Bélanger, 2005; Dickinger and Stangl, 2013; Cebi, 2013; Souitaris and Balabanis, 2007; Kurt and Atrek, 2012; Zhao and
Table 4
Varimax rotation of researched attributes. Columns: factor loading; current satisfaction, mean (SD); stated importance, mean (SD); expected dissatisfaction if worsened (–); expected satisfaction if improved (+); improvement gap.

Accessibility/speed (eigenvalue 10.1; cumulative variance 42%; Cronbach's alpha 0.90; mean satisfaction 5.31; mean stated importance 3.97)
At02 – Ease and quickness of finding the product | 0.80 | 5.25 (1.14) | 3.99 (0.76) | 2.37 | 5.56 | 0.31
At03 – Speed of “checkout” | 0.80 | 5.26 (1.22) | 4.03 (0.78) | 2.01 | 5.76 | 0.50
At04 – Ease of navigation | 0.79 | 5.37 (1.04) | 4.03 (0.78) | 1.95 | 5.59 | 0.22
At01 – Accessibility (24 h/day) | 0.79 | 5.36 (1.17) | 3.86 (0.89) | 2.03 | 5.64 | 0.28
At05 – Variety of products | 0.65 | 5.32 (1.21) | 3.96 (0.80) | 2.30 | 5.51 | 0.19

Fault recovery (eigenvalue 2.93; cumulative variance 54%; Cronbach's alpha 0.89; mean satisfaction 4.80; mean stated importance 4.00)
At18 – Fast return for contacts | 0.87 | 4.87 (1.23) | 4.18 (0.74) | 1.83 | 5.56 | 0.69
At17 – Contact speed for information and complaints | 0.82 | 4.85 (1.21) | 4.12 (0.77) | 1.89 | 5.34 | 0.49
At19 – Resolution of the requested issues | 0.79 | 4.86 (1.15) | 4.13 (0.76) | 1.77 | 5.15 | 0.29
At16 – Technical advisory services regarding products | 0.60 | 4.88 (1.27) | 4.00 (0.81) | 1.97 | 5.65 | 0.77
At24 – Indication of the nearest service center | 0.56 | 4.54 (1.20) | 3.75 (0.81) | 2.26 | 5.51 | 0.97

Buying reliability (eigenvalue 1.39; cumulative variance 60%; Cronbach's alpha 0.88; mean satisfaction 5.38; mean stated importance 4.10)
At09 – Quality/clarity of product information | 0.71 | 5.26 (1.10) | 4.11 (0.76) | 1.94 | 5.58 | 0.32
At13 – Safety and reliability in payments | 0.59 | 5.66 (1.12) | 4.40 (0.73) | 1.67 | 5.63 | 0.02
At11 – Easy access to product information | 0.57 | 5.30 (1.05) | 4.07 (0.78) | 1.92 | 5.45 | 0.15
At14 – Security of personal information | 0.57 | 5.55 (1.09) | 4.44 (0.72) | 1.74 | 5.66 | 0.11
At10 – Offering promotional products | 0.55 | 5.36 (1.11) | 3.81 (0.85) | 2.46 | 5.49 | 0.13
At08 – Technical product information | 0.55 | 5.03 (1.19) | 3.92 (0.84) | 1.99 | 5.14 | 0.11
At12 – Variety in forms of payment | 0.55 | 5.49 (1.18) | 3.96 (0.79) | 2.06 | 5.66 | 0.17

Flexibility (eigenvalue 1.09; cumulative variance 64%; Cronbach's alpha 0.77; mean satisfaction 4.42; mean stated importance 3.13)
At07 – Price comparison between suppliers | 0.73 | 4.33 (1.37) | 3.27 (0.96) | 2.76 | 4.95 | 0.63
At26 – Customizing the site design | 0.63 | 4.20 (1.27) | 2.44 (1.27) | 3.54 | 4.47 | 0.27
At06 – Product comparison system | 0.62 | 4.73 (1.30) | 3.49 (0.95) | 2.64 | 5.13 | 0.40
At25 – Possibility to collect the product at a nearby store | 0.59 | 4.41 (1.28) | 3.33 (0.98) | 2.91 | 4.84 | 0.43

Interaction/feedback (eigenvalue 1.00; cumulative variance 69%; Cronbach's alpha 0.74; mean satisfaction 4.75; mean stated importance 3.38)
At21 – Exchange ideas with other users/buyers | 0.75 | 4.82 (1.18) | 3.42 (0.91) | 2.57 | 5.08 | 0.26
At23 – Availability of evaluating quality online | 0.71 | 4.88 (1.13) | 3.61 (0.81) | 2.26 | 5.16 | 0.28
At22 – Access by mobile phone | 0.54 | 4.56 (1.11) | 3.12 (1.09) | 2.59 | 4.91 | 0.34
[Figure] Fig. 4. Importance performance analysis. a – Traditional IPA (Martilla and James, 1977): importance × satisfaction quadrants labeled “strengths,” “weaknesses,” “minor strengths,” and “minor weaknesses.” b – Diagonal IPA (Slack, 1994): zones labeled “critical,” “improve,” “keep as is,” and “excess,” defined relative to the importance–satisfaction diagonal.
Dholakia, 2009; Goi, 2012); b) Fault recovery, accessing the site for complaints. This dimension is addressed by other authors. Hor-Meyll et al. (2012, p. 1) state that “consumers who shop in online channels do not feel fully serviced, with frequent complaints about various stages of the service.” The authors found that the main complaints are related to “failures in the post-purchase customer service,” followed by “failures in delivery of the request” and “failures related to the return and exchange policy.” Yen and Lu (2008), analyzing antecedents of satisfaction in terms of confirmation (meeting expectations) and disconfirmation (recovering from failures to meet expectations), found that recovery actions have no less impact on customer satisfaction than meeting service expectations; c) Reliability of buying, in relation to both transactions and product quality (Dong, 2012; Wolfinbarger and Gilly, 2003; Schaupp and Bélanger, 2005; Cebi, 2013; Souitaris and Balabanis, 2007; Goi, 2012); d) Flexibility in access to information and care of customer needs. This is related to flexibility in site design, according to customer requirements (Schaupp and Bélanger, 2005; Dholakia and Zhao, 2010; Goi, 2012), as well as to the usability of and information on the site (Dong, 2012; Kim and Stoel, 2004; Dholakia and Zhao, 2010; Goi, 2012); e) Possibility of interaction, communication, and feedback with other users, confirming Zhao and Dholakia (2009), Dholakia and Zhao (2010), and Cebi (2013).
4. Impact of dimensions on users' satisfaction

Table 5
Linear regression of the dimensions on general satisfaction.

Dimension | B (unstandardized) | Std. error | Beta (standardized) | t | Sig.
Dim. 1 – Accessibility and speed | 0.33 | 0.05 | 0.34 | 7.26 | 0.00
Dim. 2 – Fault recovery | 0.34 | 0.05 | 0.35 | 7.47 | 0.00
Dim. 3 – Buying reliability | 0.23 | 0.05 | 0.24 | 4.98 | 0.00
Dim. 4 – Flexibility of access and services | 0.16 | 0.05 | 0.17 | 3.54 | 0.00
Dim. 5 – Interaction/feedback | 0.23 | 0.05 | 0.24 | 5.04 | 0.00

To assess how the dimensions have an impact on customer
satisfaction, we performed a linear regression of respondents' general satisfaction against their satisfaction with the analyzed dimensions. The overall satisfaction with the online shopping websites was measured by three questions: a) I am satisfied with the shopping site that I evaluated; b) Customers should be satisfied with this site; c) Overall satisfaction with the site attributes. These three questions were submitted to Varimax rotation, forming a single dimension with Cronbach's alpha = 0.66. Hair et al. (2006) consider a Cronbach's alpha equal to or higher than 0.60 acceptable. Excluding respondents who did not fully answer the questionnaire (blank responses), only the answers of 285 respondents are used in this section. Residuals from the regression equation follow a normal distribution, and the results are shown in Table 5.

The dimensions “Fault recovery” and “Accessibility/speed” are the two with the greatest impact on overall satisfaction. This shows the importance of customers being able to interact with the shopping site in case of special needs or complaints, confirming Yen and Lu (2008) and Hor-Meyll et al. (2012). In the second group are the dimensions “Buying reliability” and “Interaction/feedback.” The relevance of interaction confirms Dholakia and Zhao (2010), Zhao and Dholakia (2009), and Cebi (2013), showing that, with advancing technology, interaction between users and online services can be a differentiator in the market. The final dimension is “Flexibility of access and services,” confirming Souitaris and Balabanis's (2007) comment that “communication/interaction” with customers is more important than “personalization/flexibility” of the service.
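The regression step can be sketched in plain Python via the normal equations. This is an illustrative sketch, not the paper's actual computation: the data are synthetic and the function names are ours; the standardized betas are derived from the unstandardized coefficients as reported in Table 5.

```python
from statistics import pstdev

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    X: list of observations, each a list of predictor values.
    Returns [intercept, b1, ..., bk]."""
    rows = [[1.0] + list(r) for r in X]          # prepend intercept column
    p = len(rows[0])
    # Augmented normal-equations matrix [X'X | X'y].
    a = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(p)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(p):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [a[i][p] / a[i][i] for i in range(p)]

def standardized_betas(X, y, coeffs):
    """Beta_k = B_k * sd(x_k) / sd(y), the standardized coefficients of Table 5."""
    sy = pstdev(y)
    return [b * pstdev([r[k] for r in X]) / sy for k, b in enumerate(coeffs[1:])]

# Synthetic example: y = 2 + 3*x1 - x2 exactly, so OLS recovers the coefficients.
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 5], [0, 1]]
y = [3, 7, 7, 11, 12, 1]
b = ols(X, y)                         # approximately [2.0, 3.0, -1.0]
betas = standardized_betas(X, y, b)
```

In practice this would be run with the five dimension scores as predictors; a statistics package would also report the residual diagnostics cited in the note to Table 5.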
5. Comparing IPA and IGA to assess what to improve or offer in online shopping sites
Note 1: R² = 0.37; F = 36.65; Sig. = 0.000; skewness = 0.04 (std. error = 0.14); kurtosis = 0.15 (std. error = 0.29); N = 284.
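The two IPA classification rules compared in this section can be sketched as follows. This is a sketch under our own assumptions: the function names, cut-off values, and the diagonal band width are illustrative, not the exact parameters used in the paper.

```python
def traditional_ipa(importance, satisfaction, imp_cut, sat_cut):
    """Quadrants of traditional IPA (Martilla and James, 1977), Fig. 4a.
    Cut-off points are typically the grand means of the two scales."""
    if importance >= imp_cut:
        return "strength" if satisfaction >= sat_cut else "weakness"
    return "minor strength" if satisfaction >= sat_cut else "minor weakness"

def diagonal_ipa(importance, satisfaction, band=0.5):
    """Zones of diagonal IPA (Slack, 1994), Fig. 4b: position relative to
    the importance = satisfaction diagonal. The band width is an assumption."""
    gap = importance - satisfaction
    if gap > 2 * band:
        return "critical"
    if gap > band:
        return "improve"
    if gap >= -band:
        return "keep as is"
    return "excess"

# An attribute rated important but with mediocre satisfaction:
traditional_ipa(4.2, 3.5, imp_cut=3.9, sat_cut=5.0)  # 'weakness'
diagonal_ipa(4.2, 3.5)                               # 'improve'
```

The traditional rule depends only on which side of each mean an attribute falls, while the diagonal rule depends on the distance from the diagonal, which is why it yields the more continuous priority ordering discussed below.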
Using traditional IPA (Fig. 4a), the present research recommends that the attributes “At16 – Technical advisory service regarding products,” “At17 – Contact speed for information and complaints,” and “At19 – Resolution of the requested issues” need improvement. These aspects of fault recovery, the dimension with the greatest impact on customer satisfaction (Table 5), are those recommended for improvement. All other attributes (87.5%) are classified as minor weaknesses or as strengths. These results confirm what other studies found (Mittal et al., 1999; Bacon, 2003; Matzler et al., 2004; Oh, 2001): the stated importance of attributes has a high correlation with performance (R² = 0.62). This lack of
independence leads to an excess of attributes classified as strengths or minor weaknesses, supporting proposition P1, that “due to a linear correlation between stated importance and current performance, traditional IPA leads to a concentration of attributes being classified as minor weaknesses and strengths.” This is a strong limitation of traditional IPA when the company's performance is not compared with competitors' performance in the analysis. Diagonal IPA (Fig. 4b) suggests the improvement (or offering) of seven attributes: “At07 – Compare prices between suppliers” (Flexibility), “At16 – Technical advisory services regarding products” (Fault recovery), “At17 – Speed of contact for information and complaints” (Fault recovery), “At18 – Speed of returning contacts” (Fault recovery), “At19 – Resolution of requested issues” (Fault recovery), “At24 – Indication of the nearest service center” (Fault recovery), and “At25 – Possibility of collecting the product at a nearby store” (Flexibility). These results show that diagonal IPA overcomes the problem of the high correlation between stated importance and current satisfaction, allowing a more continuous transition in the inferred priorities and supporting proposition P2. Furthermore, most of the attributes recommended for improvement are related to the dimension “Fault recovery.” A company may turn this into a differentiator over competitors, giving consumers confidence that, if faults occur, it will recover the situation and support its customers. IGA (Fig.
5) recommends that the following attributes should be kept as they are: “At01 – Being accessible to shop at any time” (Accessibility), “At04 – Ease of navigating the shopping site” (Accessibility), “At08 – Technical information regarding the products” (Buying reliability), “At09 – Quality and clarity of information for understanding the products” (Buying reliability), “At11 – Easy access to product information” (Buying reliability), “At12 – Payment options” (Buying reliability), “At13 – Safety and reliability of payment” (Buying reliability), and “At14 – Security of personal information” (Buying reliability). If improved, these attributes would not bring a great increase in satisfaction for users; if their performance is poor, however, they can bring considerable dissatisfaction. The attributes At08, At09, At11, At12, At13, and At14 represent 86% of the dimension “Buying reliability.” This result demonstrates that, from the point of view of IGA, the level of security in online shopping is adequate. The recommendation is the same using traditional IPA (Fig. 4a) and diagonal IPA (Fig. 4b). Attributes related to Accessibility/speed are recommended to be kept as they are (At01, At04), or are near the dividing line of the IGA quadrants (At02, At05). Thus, according to traditional IPA, diagonal IPA, and IGA, except for the attribute “At03 – Speed to complete the purchase (registration, payment, checkout, etc.),” attributes related to this dimension have adequate performance. These results show that the methods tend to be coherent regarding what is an appropriate level of performance. In the case of At03, diagonal IPA
[Figure] Fig. 5. Improvement gap analysis (IGA). Vertical axis: expected dissatisfaction; horizontal axis: gap (desired – current); quadrants labeled “improve,” “keep,” “evaluate,” and “attractive.”
recommends keeping it as it is (on the borderline), and traditional IPA classifies it as a strength. According to these methods, this attribute is already at an adequate level. On the other hand, according to IGA, At03 may still improve customer satisfaction if enhanced. This result shows that IGA is a dynamic method and, unlike the traditional and diagonal IPA methods, is more selective regarding attributes that should be improved or offered. Except for “At26 – Possibility of customizing the site,” the attributes related to the Flexibility dimension (“At06 – Comparing prices of products and brands,” “At07 – Possibility of price comparison between different vendors,” and “At25 – Possibility of collecting the product at a nearby store”) are classified as attractive by IGA (Fig. 3). Traditional IPA categorizes all attributes of this dimension as minor weaknesses, while diagonal IPA suggests improving some attributes (At07 and At25) and keeping others as they are (At06 and At26). The attributes of the Interaction/feedback dimension (At21, At22, At23) tend to fall into the “evaluate” quadrant of the IGA method, but are close to being classified as “attractive,” or recommended to be kept as they are. Diagonal IPA recommends keeping them as they are, and traditional IPA classifies them as “minor weaknesses.” The results for the attributes related to the dimensions “Flexibility” and “Accessibility/speed” partially support proposition P3: by not considering the nonlinearity between attribute performance and customer satisfaction, traditional IPA tends to suggest keeping attractive attributes as they are, or classifies them as “minor weaknesses.” These results also validate Tontini and Picolo (2010). IGA differentiates attractive attributes from those that would be neutral if offered, supporting proposition P4.
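The IGA quadrant logic applied above can be sketched as follows. This is a sketch under our own assumptions: the function name, the cut-offs, and the convention that higher numbers mean more expected dissatisfaction are illustrative, not the exact procedure of Tontini and Picolo (2010).

```python
def iga_classify(improvement_gap, expected_dissatisfaction, gap_cut, dissat_cut):
    """Quadrants of improvement gap analysis (Fig. 5).

    improvement_gap: expected satisfaction if improved minus current satisfaction.
    expected_dissatisfaction: how dissatisfied customers expect to be if the
    attribute deteriorates (higher = more dissatisfied; an assumption here)."""
    if expected_dissatisfaction >= dissat_cut:
        # Deterioration would hurt: performance must at least be maintained.
        return "improve" if improvement_gap >= gap_cut else "keep"
    # Deterioration would not hurt much: improvement is optional but may delight.
    return "attractive" if improvement_gap >= gap_cut else "evaluate"

# A hypothetical attribute customers would barely miss, but would enjoy if improved:
iga_classify(0.9, 1.2, gap_cut=0.5, dissat_cut=2.0)  # 'attractive'
```

This is what lets IGA separate attractive attributes (high gap, low expected dissatisfaction) from must-be ones (low gap, high expected dissatisfaction), a distinction the IPA methods cannot make.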
6. Conclusions

This study aimed to compare how different methods evaluate what could or should be improved in online shopping websites. We analyzed 26 attributes grouped into five dimensions that have an impact on customer satisfaction with online shopping: a) Accessibility, including ease of website navigation, speed of browsing and shopping, and finding the product searched for; b) Fault recovery, including troubleshooting and technical assistance; c) Buying reliability, including transaction security and quality of products; d) Flexibility, including the ability to change the website design and compare products on competing sites; and e) Ability to interact and express satisfaction with the site to other users.

A regression analysis of the dimensions shows that “Fault recovery” (0.34) and “Accessibility/speed” (0.33) have the greatest impact on customer satisfaction. Second come the dimensions “Buying reliability” (0.23) and “Opportunity for interaction and feedback” (0.23). Last, but still significant, is “Flexibility of access and of services” (0.16). Although they have a significant relationship with general satisfaction, these dimensions explain only 37% of the variability of respondents' overall satisfaction (R² = 0.37), indicating that other dimensions also affect customer satisfaction.

The results of the present research support the findings of previous studies (Mittal et al., 1999; Oh, 2001; Bacon, 2003; Matzler et al., 2004). Although “stated importance” is considered more appropriate for use with IPA (Bacon, 2003), due to a linear correlation between stated importance and current performance (R² = 0.62), traditional IPA (Martilla and James, 1977) tends to classify attributes as “minor weaknesses” and “strengths.” If the company does not evaluate its performance in relation to competitors, this is a limitation of the method.
We show that diagonal IPA (Slack, 1994) overcomes this limitation, allowing a more continuous transition in priorities, supporting previous
research (Eskildsen and Kristensen, 2006). Although it overcomes the problems of traditional IPA, the present study also shows that, because diagonal IPA does not take into consideration the nonlinearity between attribute performance and customer satisfaction (Kano et al., 1984), it suggests keeping attractive attributes as they are, not differentiating them from neutral ones. IGA (Tontini and Picolo, 2010) fuses the Kano model and IPA. Comparing it with the traditional and diagonal IPA methods, the present study shows that IGA can identify attractive attributes (which bring satisfaction to customers if present or performing well, but do not bring dissatisfaction if absent or performing poorly), differentiating them from neutral attributes (supporting proposition P4). IGA is more selective than the IPA methods regarding attributes that should be improved or offered.
7. Managerial recommendations

IGA is selective in differentiating attractive attributes from neutral ones, and is better suited than IPA when looking for what to improve or offer in online stores in order to differentiate them in the market. The present study also identifies the coherence of IGA and diagonal IPA (Slack, 1994) regarding what is critical for improvement or should be kept as it is. Diagonal IPA and IGA show that all attributes related to “Fault recovery” should be improved, as it is a strategic aspect with which companies should deal. In fact, the present study shows that fault recovery is among the dimensions with the highest impact on customer satisfaction (Table 5), in practice with the same or a slightly higher impact than “Accessibility/speed.” These results reinforce the finding of Ahmad (2002), supported by Johnston and Fern (1999, p. 69), that recovery actions “can restore the customer to a satisfied state, whereas an enhanced set of recovery actions will delight the customer.”

Both the IPA methods and IGA show that the attributes related to operational safety should stay as they are: security is considered to be at an adequate level by customers who already shop on the internet. These methods are also coherent for the attributes related to “Accessibility/speed.” Except for the attribute “At03 – Speed to complete the purchase (registration, payment, checkout, etc.),” this dimension fulfills customer expectations. Being a dynamic method, unlike the IPA methods, IGA suggests that improving At03 may still increase customer satisfaction.
8. Further studies

While contributing to knowledge of what affects the satisfaction of online shoppers, the present study has some limitations. The results cannot be taken to reflect the behavior of all users of these services, and a larger study should be performed. Furthermore, the researched dimensions do not encompass all dimensions affecting user satisfaction. Moreover, a question remains: considering the nonlinearity between attribute performance and general satisfaction, can attributes classified as attractive by the IGA method, if actually improved, lead to a significant improvement in customers' overall satisfaction? Other questions also remain: What is the perception of users who have not yet used these services? What is the impact of these dimensions on customers who do not yet buy online? These and other questions still have to be answered.
Acknowledgments

Research supported by Edital MCTI/CNPq/MEC/CAPES No 18/2012, Brazil, Research Project 403863/2012-0. I acknowledge the collaboration of Júlio César da Silva, Elis Regina Mulinari Zanin, Eliane Fátima Strapazzon Beduschi, and Margarete de Fátima Marcon, students of our MBA in Business Management, for collecting the data for the present paper.
Appendix A. Researched attributes

At01 – Accessibility to shop at any time (24 h/day)
At02 – Quickness of finding the product you want
At03 – Speed to complete the purchase (registration, payment, checkout, etc.)
At04 – Ease of navigating the shopping site
At05 – Variety of products (different models and brands of the same product type)
At06 – Ability to compare prices of different products and brands (automatic comparison)
At07 – Possibility of comparison between different site suppliers (competitors)
At08 – Information available for inspection of products
At09 – Quality and clarity of information for understanding the product
At10 – Offer of promotional products (specials, sales, etc.)
At11 – Ease of access to product information (details and techniques)
At12 – Payment options (post-delivery, credit card, etc.)
At13 – Security and reliability of payment
At14 – Security of website regarding personal information provided (no transfer of information to other companies)
At15 – Information about the quality of competing products (reliability, duration, etc.) for better choice
At16 – Technical advisory services (guarantee) provided for products sold
At17 – Speed of contact with people on the site for information or complaints
At18 – Fast reply after contact for problems or request for information
At19 – Resolution of problems when contacted, giving feedback on the solution
At20 – Brand recognition of the site by the market
At21 – Possibility of exchanging ideas with other users/buyers on the site
At22 – Possibility of making purchases via mobile phone (mobile, tablet, smartphone)
At23 – Possibility of evaluating the quality of products/services purchased and having access to how others have rated this quality
At24 – Automatically displaying technical product assistance closest to your location
At25 – Possibility of buying from the site and collecting the product at the nearest store in your location or by preference
At26 – Possibility of customizing the site, changing color, position of information, and typeface
References

Abalo, J., Varela, J., Manzano, V., 2007. Importance values for importance–performance analysis: a formula for spreading out values derived from preference rankings. J. Bus. Res. 60 (2), 115–121.
Ahmad, S., 2002. Service failures and customer defection: a closer look at online shopping experiences. Manag. Serv. Qual. 12 (1), 19–29.
Ahn, T., Ryu, S., Han, I., 2005. The impact of the online and offline features on the user acceptance of Internet shopping malls. Electron. Commer. Res. Appl. 3 (4), 405–420.
Ahrholdt, D.C., 2011. Empirical identification of success-enhancing web site signals in e-tailing: an analysis based on known e-tailers and the theory of reasoned action. J. Mark. Theory Pract. 19 (4), 441–458.
Azzopardi, E., Nash, R., 2013. A critical evaluation of importance–performance analysis. Tour. Manag. 35, 222–233.
Bacon, D.R., 2003. A comparison of approaches to importance-performance analyses. Int. J. Mark. Res. 45 (1), 55–71.
Bai, B., Law, R., Wen, I., 2008. The impact of website quality on customer satisfaction and purchase intentions: evidence from Chinese online visitors. Int. J. Hosp. Manag. 27 (3), 391–402.
Bauer, H.H., Falk, T., Hammerschmidt, M., 2006. eTransQual: a transaction process-based approach for capturing service quality in online shopping. J. Bus. Res. 59 (7), 866–875.
Bottomley, P.A., Doyle, J.R., Green, R.H., 2000. Testing the reliability of weight elicitation methods: direct rating versus point allocation. J. Mark. Res. 37 (4), 508–513.
Burke, R.R., 2002. Technology and the customer interface: what consumers want in the physical and virtual store. J. Acad. Mark. Sci. 30 (4), 411–432.
Cebi, S., 2013. Determining importance degrees of website design parameters based on interactions and types of websites. Decis. Support Syst. 54 (2), 1030–1043.
Chen, L.H., Wu, T.H., 2009. The research of online service recovery based on Kano's model. In: Proceedings of the Ninth International Conference on Electronic Business, Macau, November 30–December 4, pp. 233–242.
Cheung, C.M., Lee, M.K., 2005. Consumer satisfaction with internet shopping: a research framework and propositions for future research. In: Proceedings of the 7th International Conference on Electronic Commerce, Xi'an, China, August, pp. 327–334.
Chrzan, K., Golovashkina, N., 2006. An empirical test of six stated importance measures. Int. J. Mark. Res. 48 (6), 717–740.
Collier, J.E., Bienstock, C.C., 2006. Measuring service quality in e-retailing. J. Serv. Res. 8 (3), 260–275.
Cristobal, E., Flavián, C., Guinalíu, M., 2007. Perceived e-service quality (PeSQ): measurement validation and effects on consumer satisfaction and web site loyalty. Manag. Serv. Qual. 17 (3), 317–340.
Deng, W.-J., Kuo, Y.-F., Chen, W.C., 2008. Revised importance-performance analysis: three-factor theory and benchmarking. Serv. Ind. J. 28 (1), 37–51.
Dholakia, R.R., Zhao, M., 2010. Effects of online store attributes on customer satisfaction and repurchase intentions. Int. J. Retail Distrib. Manag. 38 (7), 482–496.
Dickinger, A., Stangl, B., 2013. Website performance and behavioral consequences: a formative measurement approach. J. Bus. Res. 66 (6), 771–777.
Dong, X.-M., 2012. Index system and evaluation model of e-commerce customer satisfaction. In: International Symposium on Robotics and Applications (ISRA), Montreal, Canada, October, pp. 439–442.
Eskildsen, J.K., Kristensen, K., 2006. Enhancing importance-performance analysis. Int. J. Product. Perform. Manag. 55 (1), 40–60.
Finn, A., 2011. Investigating the non-linear effects of e-service quality dimensions on customer satisfaction. J. Retail. Consum. Serv. 18, 27–37.
Garver, M.S., 2003. Best practices in identifying customer-driven improvement opportunities. Ind. Mark. Manag. 32 (6), 455–466.
Goi, C.L., 2012. A review of web evaluation criteria for e-commerce web sites. J. Internet Bank. Commer. 17 (3), 1–10.
Griffin, A., Hauser, J.R., 1993. The voice of the customer. Mark. Sci. 12 (1), 1–27.
Hair, J., Anderson, R., Tatham, R., Black, W., 2006. Multivariate Data Analysis. Pearson/Prentice Hall, Englewood Cliffs, NJ.
Hartline, M.D., Wooldridge, B.R., Jones, K.C., 2003. Guest perceptions of hotel quality: determining which employee groups count most. Cornell Hotel Restaur. Adm. Q. 44 (1), 43–52.
Hor-Meyll, L.F., Barreto, M.B., Chauvel, M.A., de Araujo, F.F., 2012. Por que consumidores reclamam de compras online? [Why do consumers complain about online purchases?]. Braz. Bus. Rev. 9 (4), 133–156.
Johnston, R., Fern, A., 1999. Service recovery strategies for single and double deviation scenarios. Serv. Ind. J. 19 (2), 69–82.
Kano, N., Seraku, N., Takahashi, F., Tsuji, S., 1984. Attractive quality and must-be quality. J. Jpn. Soc. Qual. Control 14 (2), 39–48.
Kim, S., Stoel, L., 2004. Apparel retailers: website quality dimensions and satisfaction. J. Retail. Consum. Serv. 11 (2), 109–117.
Kurt, S.D., Atrek, B., 2012. The classification and importance of E-S-Qual quality attributes: an evaluation of online shoppers. Manag. Serv. Qual. 22 (6), 622–637.
Lee, G.G., Lin, H.F., 2005. Customer perceptions of e-service quality in online shopping. Int. J. Retail Distrib. Manag. 33 (2), 161–176.
Loiacono, E.T., Watson, R.T., Goodhue, D.L., 2007. WebQual: an instrument for consumer evaluation of web sites. Int. J. Electron. Commer. 11 (3), 51–87.
Martilla, J.A., James, J.C., 1977. Importance-performance analysis. J. Mark. 41, 77–79.
Matzler, K., Bailom, F., Hinterhuber, H., Renzl, B., Pichler, J., 2004. The asymmetric relationship between attribute-level performance and overall customer satisfaction: a reconsideration of the importance-performance analysis. Ind. Mark. Manag. 33 (4), 271–277.
Mikulic, J., Prebezac, D., 2008. Prioritizing improvement of service attributes using impact range-performance analysis and impact-asymmetry analysis. Manag. Serv. Qual. 18 (6), 559–576.
Mittal, V., Kumar, P., Tsiros, M., 1999. Attribute-level performance, satisfaction, and behavioral intentions over time: a consumption-system approach. J. Mark. 63, 88–101.
Nielsen IBOPE, 2014. Available at: 〈http://www.nielsen.com/br/pt/press-room/2014/Numero-de-pessoas-com-acesso-a-internet-no-Brasil-supera-120-milhoes.html〉 (accessed 13.02.16).
Oh, H., 2001. Revisiting importance-performance analysis. Tour. Manag. 22, 617–627.
Oh, L.B., Zhang, Y., 2010. Understanding Chinese users' preference for domestic over foreign Internet services. J. Int. Consum. Mark. 22 (3), 227–243.
O'Neill, M., Wright, C., Fitz, F., 2001. Quality evaluation in on-line service environments: an application of the importance-performance measurement technique. Manag. Serv. Qual. 11 (6), 402–417.
Okada, S., Porto, R.B., Ricardo, L.F.C., Almeida, M.I.S., 2014. Varejo multicanal e mobilidade do e-consumidor Brasileiro: um estudo descritivo com dados secundários de 2012 e 2013 [Multichannel retailing and the mobility of the Brazilian e-consumer: a descriptive study with secondary data from 2012 and 2013]. In: Proceedings of CLAV – Congresso Latino-Americano de Varejo, São Paulo, Brazil, October, vol. 7.
Öz, M., 2012. A research to evaluate the airline companies' websites via a consumer oriented approach. Afr. J. Bus. Manag. 6 (14), 4880–4900.
Parasuraman, A., Zeithaml, V.A., Malhotra, A., 2005. E-S-QUAL: a multiple-item scale for assessing electronic service quality. J. Serv. Res. 7 (3), 213–233.
Picolo, J.D., Tontini, G., 2008. Análise do contraste da penalidade e da recompensa (PRC): identificando oportunidades de melhoria em um serviço [Penalty–reward contrast analysis: identifying improvement opportunities in a service]. RAM – Rev. Adm. Mackenzie 9 (5), 35–58.
Pokryshevskaya, E., Antipov, E.A., 2013. Importance-performance analysis for internet stores: a system based on publicly available panel data. Higher School of Economics Research Paper No. WP BRP 08/MAN/2013. Available at SSRN: http://dx.doi.org/10.2139/ssrn.2257770.
Pokryshevskaya, E.B., Antipov, E.A., 2012. The strategic analysis of online customers' repeat purchase intentions. J. Target. Meas. Anal. Mark. 20 (3), 203–211.
Rababah, O.M., Al-Shaboul, M.E., Al-Sayyed, R., 2011. A new vision for evaluating the quality of e-commerce websites. Int. J. Adv. Corp. Learn. 4 (1), 32–41.
Ramanathan, R., 2010. E-commerce success criteria: determining which criteria count most. Electron. Commer. Res. 10 (2), 191–208.
Ribbink, D., Van Riel, A.C., Liljander, V., Streukens, S., 2004. Comfort your online customer: quality, trust and loyalty on the internet. Manag. Serv. Qual. 14 (6), 446–456.
Rolland, S., Freeman, I., 2010. A new measure of e-service quality in France. Int. J. Retail Distrib. Manag. 38 (7), 497–517.
Schaupp, L.C., Bélanger, F., 2005. A conjoint analysis of online consumer satisfaction. J. Electron. Commer. Res. 6 (2), 95–111.
Seng Wong, M., Hideki, N., George, P., 2011. The use of importance-performance analysis (IPA) in evaluating Japan's e-government services. J. Theor. Appl. Electron. Commer. Res. 6 (2), 17–30.
Slack, N., 1994. The importance-performance matrix as a determinant of improvement priority. Int. J. Oper. Prod. Manag. 14 (5), 59–75.
Smith, R., Deppa, B., 2009. Two dimensions of attribute importance. J. Consum. Mark. 26 (1), 28–38.
Souitaris, V., Balabanis, G., 2007. Tailoring online retail strategies to increase customer satisfaction and loyalty. Long Range Plan. 40 (2), 244–261.
Stiakakis, E., Georgiadis, C.K., 2009. E-service quality: comparing the perceptions of providers and customers. Manag. Serv. Qual. 19 (4), 410–430.
Swaid, S.I., Wigand, R.T., 2009. Measuring the quality of e-service: scale development and initial validation. J. Electron. Commer. Res. 10 (1), 13–28.
Tontini, G., Picolo, J.D., 2010. Improvement gap analysis. Manag. Serv. Qual. 20 (6), 565–584.
Tontini, G., Silveira, A., 2007. Identification of satisfaction attributes using competitive analysis of the improvement gap. Int. J. Oper. Prod. Manag. 27 (5), 482–500.
Valvi, A.C., Fragkos, K.C., 2012. Critical review of the e-loyalty literature: a purchase-centred framework. Electron. Commer. Res. 12 (3), 331–378.
Wolfinbarger, M., Gilly, M.C., 2003. eTailQ: dimensionalizing, measuring and predicting e-tail quality. J. Retail. 79 (3), 183–198.
Xing, Y., Grant, D.B., McKinnon, A.C., Fernie, J., 2010. Physical distribution service quality in online retailing. Int. J. Phys. Distrib. Logist. Manag. 40 (5), 415–432.
Yen, C.H., Lu, H.P., 2008. Effects of e-service quality on loyalty intention: an empirical study in online auction. Manag. Serv. Qual. 18 (2), 127–146.
Zeithaml, V.A., Parasuraman, A., Malhotra, A., 2000. E-service quality: definition, dimensions and conceptual model. Marketing Science Institute Working Paper Series, Cambridge, MA.
Zhao, M., Dholakia, R.R., 2009. A multi-attribute model of web site interactivity and customer satisfaction: an application of the Kano model. Manag. Serv. Qual. 19 (3), 286–307.