Journal of Business Research 60 (2007) 365–370

Evaluating the efficiency of Internet banner advertisements

Ritu Lohtia a,⁎, Naveen Donthu a, Idil Yaveroglu b

a Georgia State University, United States
b American University, United States

Received 26 September 2005; accepted 24 October 2006

Abstract

In this paper the authors present an approach for measuring the efficiency of banner advertisements. Their approach, using data envelopment analysis (DEA), accommodates multiple inputs and multiple outputs and estimates a relative measure of efficiency. In an illustrative example, the authors evaluate the efficiency of banner advertisements using click-through data and respondent recall and attitude data.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Internet banner advertisements; Effectiveness; Efficiency; Data envelopment analysis
⁎ Corresponding author. Georgia State University, Marketing Department, P.O. Box 3991, Atlanta, GA 30302-3991, United States. Tel.: +1 404 651 2740; fax: +1 404 651 4198. E-mail address: [email protected] (R. Lohtia).
doi:10.1016/j.jbusres.2006.10.023

1. Evaluating the efficiency of Internet banner advertisements

In 1994, the first banner advertisement was introduced, and as a result, a new realm of advertising emerged. It was then that the idea that the Internet could be used as a marketing communication tool became a reality. In the ten years since, the Internet advertising industry has exploded. By the end of 2004, annual online advertising revenues reached $9.6 billion in the United States, an increase of almost 33% over 2003, of which display advertisements accounted for approximately 19% (PricewaterhouseCoopers, 2005a). Online advertising revenues continue to increase and are estimated to exceed $12.5 billion for 2005, a 30% increase over 2004 (PricewaterhouseCoopers, 2005b).

Measurement, metrics, and accountability are the current mantra in banner advertising. Bruner's (2005) ten-year plan for online media calls for a methodology that can incorporate multiple output objectives in the assessment of an advertising campaign's success. This paper illustrates a methodology that allows the simultaneous use of multiple inputs and multiple outputs to determine the efficiency of an advertisement. In practice, the performance of any advertising campaign depends on many input and output factors, so a methodology that addresses multiple inputs and multiple outputs would be a significant contribution to the literature. Based on real data from an online advertising firm, this paper offers advertisers an alternative method to examine the performance of banner advertisements. This research will help online advertisers develop more efficient advertisements relative to their most efficient prior or current advertising campaigns by offering them specific guidelines on which input or output variables to manipulate for maximum advertising efficiency. Given current economic conditions, in which the focus is on extracting maximum productivity from a set of resources, this research is timely and provides helpful guidance to advertisers.

Prior techniques evaluate the effectiveness of banner advertising by 1) examining differences in output variables (e.g., awareness, recall, and intent to buy) one at a time, 2) examining the impact of one input variable on one output variable (e.g., testing the impact of animation on recall), 3) examining the impact of multiple input variables on a single output variable (e.g., the impact of animation and size on recall), or 4) examining the impact of multiple inputs on multiple outputs simultaneously (e.g., the effect of exposure duration, content, background complexity, and viewing mode on unaided recall and recognition). Note that in all of these techniques, the effectiveness of a banner advertisement is not explicitly measured relative to other advertisements. The research conducted thus far suggests various input factors that contribute to greater desired outputs; however, no research has examined the efficiency of the input and output
process. The application of efficiency to the measurement of banner advertisement performance adds to the body of knowledge in this area and contributes a new technique that may be useful to practitioners. We introduce data envelopment analysis (DEA) as a tool to measure the efficiency of banner advertisements.

2. Data envelopment analysis

DEA is a technique for measuring the relative efficiency of a process or unit characterized by multiple inputs and outputs. DEA converts multiple inputs and outputs into a single measure of performance, generally referred to as "relative efficiency." Lines connecting the points of the most efficient units create the efficiency frontier, or envelope, similar to the production frontier in microeconomics (Donthu and Yoo, 1998). One distinguishing characteristic of DEA is that it estimates the weights assigned to the inputs and outputs individually for each unit. Unlike regression techniques, in which the same weights are applied to all units to produce one output measure, DEA treats each unit individually and estimates the weights for the input and output variables that maximize that unit's efficiency.

DEA was first proposed by Charnes et al. (1978), and it has been applied successfully in many areas (Seiford and Thrall, 1990). In the marketing literature, DEA was initially applied in 1985 (Charnes et al., 1985). More recent marketing applications include efficiency measurement of advertising campaigns (Luo and Donthu, 2001), sales territory characteristics (Pilling et al., 1999), and benchmarking of marketing productivity (Donthu et al., 2005). Although most applications of DEA have been on populations of objects, it has also been used on random samples similar to the one in this research. For example, Xue and Harker (2002) use DEA to illustrate the efficiency of a random sample of Web users, and Luo and Donthu (2001) use it to evaluate inefficiency in the media spending of 100 advertisers.
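To make the mechanics concrete, the constant-returns (CCR) model of Charnes et al. (1978) can be solved as a small linear program: for each unit, maximize the weighted sum of its outputs subject to its weighted inputs equaling 1 and no unit's weighted outputs exceeding its weighted inputs. The following Python sketch is our own illustration using scipy, not the software used in this study:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """CCR (constant returns to scale) efficiency of unit o.

    X: (n_units, n_inputs) input matrix; Y: (n_units, n_outputs) output matrix.
    Solves: max u'Y[o]  s.t.  v'X[o] = 1,  u'Y[j] - v'X[j] <= 0 for all units j,
    with weights u, v >= 0 chosen to show unit o in its best possible light.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # linprog minimizes, so negate
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # normalization: v'X[o] = 1
    A_ub = np.hstack([Y, -X])                         # no unit may exceed efficiency 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0])
    return -res.fun

# Toy data: one input, one output; unit 0 makes the same output from half the input.
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
print(dea_efficiency(X, Y, 0))  # ≈ 1.0 (on the frontier)
print(dea_efficiency(X, Y, 1))  # ≈ 0.5
```

Because the weights u and v are re-estimated for every unit, each advertisement is evaluated under the weighting most favorable to it, which is exactly the property that distinguishes DEA from a single fitted regression.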
In using DEA to measure banner advertisement performance, we estimate a banner advertisement's efficiency by comparing its inputs and outputs with those of all banner advertisements under consideration. The DEA application produces an efficiency frontier that represents the optimal levels of outputs for given levels of inputs. Banner advertisements on the efficiency frontier are the most efficient. A banner advertisement is considered efficient (efficiency = 1) if its output is optimal (the maximum possible) for its inputs in comparison with the inputs and outputs of all comparable banner advertisements. Banner advertisements whose efficiency is less than 1 lie inside the frontier.

As the number of variables (inputs and outputs) relative to the number of banner advertisements increases, the number of efficient banner advertisements also increases, and the results become less meaningful. It is, for example, not useful to know that 25 of 45 banner advertisements have an efficiency of 1.0; there is then a practical need to distinguish among those 25 efficient advertisements. In such instances, super efficiencies are computed. Super efficiency measures the amount of influence an efficient banner advertisement has on the efficiency frontier.
It is computed by running the DEA several times, omitting one of the efficient banner advertisements in each run. As a result, the shape (the convexity, for example) of the frontier is altered. The super efficiency of a banner advertisement is directly related to its influence on the shape of the frontier, so super efficiency can be used to rank order all efficient banner advertisements.

DEA has certain advantages over regression approaches. It is considered more accurate than regression techniques in identifying efficient and inefficient units and in demonstrating which factors influence efficiency, because it considers each observation separately and compares it to its efficient role-model advertisements (Charnes et al., 1978). Regression approaches, on the other hand, may be useful when the general characteristics of performance are of interest for policy analysis or for predicting the future behavior of an entire advertising campaign rather than a specific advertisement. When an advertiser or advertising agency wants to ensure that the inputs it uses are optimal relative to the outputs being generated, or wants to examine the performance of individual advertisements vis-à-vis the advertiser's other advertisements or those in the campaign, DEA is appropriate.

DEA can be a useful tool for pretesting advertisements. An advertiser or advertising agency can compute the efficiency of all the advertisements in a campaign and then run only the efficient ones. Using multiple inputs and multiple outputs to make this decision may be better than basing it solely on click-through rate (CTR), especially if the advertiser has multiple objectives for its advertising campaign.
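The leave-one-out computation described above corresponds to the standard Andersen and Petersen super-efficiency model: re-solve the DEA program for an efficient unit with that unit's own constraint removed, so its score can exceed 1 and efficient units can be rank ordered. A sketch of that formulation, again our illustration on a two-unit toy data set rather than the study's software:

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """Andersen-Petersen super efficiency of unit o: the CCR multiplier model
    with unit o's own 'efficiency <= 1' constraint dropped, so scores above 1
    are possible and can be used to rank the efficient units."""
    n, m = X.shape
    s = Y.shape[1]
    keep = [j for j in range(n) if j != o]            # frontier built without unit o
    c = np.concatenate([-Y[o], np.zeros(m)])          # linprog minimizes, so negate
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # normalization: v'X[o] = 1
    A_ub = np.hstack([Y[keep], -X[keep]])             # constraints from all other units
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n - 1), A_eq=A_eq, b_eq=[1.0])
    return -res.fun

X = np.array([[1.0], [2.0]])  # one input
Y = np.array([[1.0], [1.0]])  # one output
print(super_efficiency(X, Y, 0))  # ≈ 2.0: unit 0 could double its input and stay efficient
print(super_efficiency(X, Y, 1))  # ≈ 0.5: unchanged for an inefficient unit
```

For inefficient units the score is unchanged, since they never lie on the frontier; for efficient units the score measures how far the frontier would move without them.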
For advertisers who would like to understand what specific characteristics make certain advertisements more efficient, DEA, unlike regression approaches, provides rich information through sensitivity analysis at the individual banner advertisement level, which can be used to improve the performance of individual advertisements. By comparing the inputs and outputs of an inefficient banner advertisement with those of the efficient advertisements, we can estimate the amount of inadequacy (slack) in each variable. This information helps advertisers allocate resources more efficiently and improve the performance of a banner advertisement. An inefficient advertisement may become efficient by increasing all outputs by amounts equal to their corresponding slack (i.e., moving toward the efficiency frontier vertically), by decreasing all inputs by amounts equal to their corresponding slack (i.e., moving toward the frontier horizontally), or by a combination of the two.

3. Illustration

3.1. Output variables

In this research we use three output variables – CTR, attitude towards the ad, and recall – to measure the efficiency of banner advertisements. Although there is no single industry standard for measuring the success of a banner advertisement, one of the most common metrics is CTR (Chandon et al., 2003). According to the Interactive Advertising Bureau (2004), CTR is the ratio of
the number of times an advertisement is clicked to the number of advertisement impressions (i.e., the number of times an advertisement is served to a user's browser). Other output variables frequently used to examine the effectiveness of advertisements are attitude towards the ad and recall (Stevenson et al., 2000). While CTR may be used when the objective of the advertisement is direct response (the goal of many advertisers being to bring potential users to their Web site via a click), attitude towards the ad and recall are used when the objective is branding. The use of these three outputs allowed us to examine the efficiency of advertisements with both direct-response and branding objectives.

3.2. Input variables

Our objective was to focus on input variables that were advertisement related and therefore could easily be adjusted by the advertiser to improve advertising efficiency. We did not focus on variables that represent viewer characteristics (e.g., involvement level, user mode, and attitude toward Web advertising), because an advertiser cannot adjust these to improve advertising efficiency. We identified advertisement-related variables, such as incentives, emotional appeals, color, interactivity, animation, and message length, that impact the effectiveness of banner advertisements.

3.3. Administration

An online ad serving firm provided us with a large sample of randomly selected banner advertisements, all of the same size, that represented a wide variety of products and services. Such an examination of the performance of advertisements across a variety of products and services can be useful to advertisers that have a portfolio of products and would like to understand what makes advertisements successful across different product categories. The advertisements were served on a network of 500 sites.
The online ad serving firm assured us that, because the sites were so varied, their demographic composition was very similar to the Internet average. Given this variety, we also believe that any context effects should average out across the different sites. Together with the online ad serving firm, we developed an online coding tool to code the banner advertisements for use of incentives, emotional appeals, color, interactivity, animation, and message length. We measured incentives by evaluating the banner advertisements for the presence or absence of incentives to click. We initially assessed the banner advertisements' use of emotional appeals by capturing a range of positive and negative emotions. Since we found that some advertisements used no emotional appeal at all and less than 1% of the advertisements used negative emotions, we defined emotion as a binary variable capturing the use of emotions or the lack thereof. Lohtia et al. (2003) use a similar scale for examining the impact of emotions on banner advertising effectiveness. We measured interactivity by evaluating the banner advertisements for the presence or absence of interactive elements. To
assess the impact of color on the efficiency of a banner advertisement, we evaluated the number of colors present. Since there was considerable subjectivity across judges in determining the number of colors, we collapsed the number of colors into a low, medium, and high scale, similar to that of Lohtia et al. (2003), to examine the impact of color on banner advertising effectiveness. We conceptualized animation as either present or not present. We calculated message length by counting the number of words in each advertisement. For animated advertisements, if the frames in the advertisement were linked, we summed the number of words across frames to arrive at message length; if each frame was independent, we averaged the number of words across frames.

Five independent judges coded the advertisements. The judges were marketing doctoral candidates who completed a joint training session in which we familiarized them with the coding scheme. Each judge had a unique password to the Web site on which they could view and code the banner advertisements. We instructed the judges to check boxes for each banner advertisement's appeal, number of colors, and presence of interactive elements, animation, incentives to click, and message length. To ascertain interjudge reliability, all judges coded a subsample of 100 randomly selected advertisements. For all independent variables, we estimated the interjudge reliability coefficient using Rust and Cooil's (1994) proportional reduction-in-loss reliability measure, which can be evaluated by means of the same criteria as Cronbach's alpha (i.e., .70 is acceptable, but .90 is desirable). In this study, all reliabilities were high and in the desirable range.

Table 1
DEA results

Banner ad   Efficiency with   Efficiency with CTR,    Super efficiency with CTR,
number      CTR as output     attitude toward ad,     attitude toward ad,
                              and recall as outputs   and recall as outputs
 1          0.53              1                       1
 2          0.56              0.68                    0.68
 3          1                 1                       3.93
 4          1                 1                       1.35
 5          1                 1                       1.11
 6          0.5               1                       1.26
 7          1                 1                       1.08
 8          0.85              1                       1.18
 9          1                 1                       1.38
10          1                 1                       1.03
11          0.67              0.79                    0.79
12          1                 1                       1.51
13          0.43              0.63                    0.63
14          1                 1                       1.47
15          1                 1                       1.43
16          1                 1                       3.8
17          0.72              0.63                    0.63
18          0.59              1                       1.16
19          0.54              0.74                    0.74
20          0.9               1                       1.37
21          0.66              0.64                    0.64
22          1                 1                       1.21
23          0.64              1                       1.05
24          0.57              0.63                    0.63
25          0.84              1                       1.11
26          0.82              1                       1.04
27          0.35              0.5                     0.5
28          0.53              0.83                    0.83
29          1                 1                       1.79
30          0.69              0.95                    0.95
31          0.57              1                       1.59
32          1                 1                       1.24
33          1                 1                       1.16
34          1                 1                       1.51
35          0.56              0.55                    0.55
36          0.53              0.79                    0.79
37          0.51              0.6                     0.6

Since the ad serving firm only provided us with CTR as the output variable for the advertisements and no branding-related output measures, we created two online surveys to collect data on recall and attitude towards the ad. We randomly selected 42 banner advertisements from the large sample of advertisements given to us by the ad serving firm
and randomly assigned these advertisements to the two surveys. (Most research on banner advertisements examines only a few advertisements, ranging from 1 to 45 (Lohtia et al., 2003).) We also randomly assigned 230 respondents, undergraduate students enrolled in a basic marketing class at a major urban U.S. university, to the two surveys. Given that most students at this university work full time (and attend school part time), their demographics were similar to those of the average Internet viewer: 76% were in the 18–24 age group and 19% were in the 25–34 age group. The students visited the study Web site and were asked to browse through seven pages. We drew the content of the Web site from popular news and technology sites that students frequently visit. During this browsing, each respondent was exposed to 21 banner advertisements.
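The frame-based message-length rule described above (sum the words of linked frames; average over independent frames) can be sketched as a small function. The function name and the list-of-word-counts representation are our illustrative assumptions, not the coding tool itself:

```python
def message_length(frame_word_counts, frames_linked):
    """Message length of a (possibly animated) banner advertisement:
    total words when the frames form one linked message,
    average words per frame when the frames are independent."""
    if frames_linked:
        return sum(frame_word_counts)
    return sum(frame_word_counts) / len(frame_word_counts)

# A hypothetical three-frame animated ad with 4, 6, and 2 words per frame:
print(message_length([4, 6, 2], frames_linked=True))   # 12 (one continuing message)
print(message_length([4, 6, 2], frames_linked=False))  # 4.0 (independent frames)
```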
Table 2
Rankings of banner advertisements (each column lists banner numbers in rank order, rank 1 = best). The first three columns rank by a single output; the fourth by an output-to-input ratio; the fifth by a regression of a single output on multiple inputs; the last two by DEA with multiple outputs and multiple inputs.

Rank   CTR   Recall   Attitude   CTR/message length   CTR = f(inputs)   Efficiency (a)   Super efficiency (b)
  1      4     23       25         4                    18                 1                 3
  2     22     15        3        22                    36                 3                16
  3     33     12       18        33                     2                 4                29
  4     23      1       23        23                    21                 5                31
  5     21      9       11        21                    19                 6                34
  6     19     11       29         3                    24                 7                12
  7     16      5       14         9                    20                 8                14
  8      9     25        7         7                    15                 9                15
  9      2     27        9        16                    14                10                 9
 10      8     22        4        14                    33                12                20
 11      3     17       12        18                    34                14                 4
 12     35     20       13         8                    35                15                 6
 13     17      4       37        19                    22                16                32
 14     32     31        6        15                    31                18                22
 15     15      3       10        12                     4                20                 8
 16     12     29       32        35                    26                22                18
 17     18     21       17        30                    37                23                33
 18     29     16        2         2                    28                25                 5
 19     30     10       26        17                    23                26                25
 20     11     30       31        32                    32                29                 7
 21     14     19       33        26                     1                31                23
 22     27     35       16         1                    12                32                26
 23      5     14       34        20                    25                33                10
 24      7      8       30        34                    11                34                 1
 25     26      6       27         5                    27                30                30
 26     31      2       24        28                    29                28                28
 27     20      7        5        29                     9                11                11
 28     25     32       20        37                     7                36                36
 29     34     33       22        27                    10                19                19
 30     28     36        8        25                    30                 2                 2
 31      1     18       15        11                    13                21                21
 32     24     26       21        31                    17                13                13
 33     37     28       19         6                     8                17                17
 34     10     34       28        10                     5                24                24
 35     13     24        1        24                     3                37                37
 36      6     13       36        13                     6                35                35
 37     36     37       35        36                    16                27                27

a The first 24 banners listed were all efficient.
b The first 24 banners that were efficient are ranked using super efficiency.
Table 3
Sensitivity analysis for banner advertisements

Banner ad 11 (efficiency = .79)
Variable                   Actual   Slack   Value if efficient
Input 1   Color            3        0       3
Input 2   Words            6        0       6
Input 3   Animation        0        0       0
Input 4   Interactivity    0        0       0
Input 5   Incentive        0        0       0
Input 6   Emotion          0        0       0
Output 1  CTR              .001     .002    .003
Output 2  Attitude         2.62     .32     2.94
Output 3  Recall           .27      .05     .32

Banner ad 30 (efficiency = .95)
Variable                   Actual   Slack   Value if efficient
Input 1   Color            3        1       2
Input 2   Words            6        0       6
Input 3   Animation        1        1       0
Input 4   Interactivity    0        0       0
Input 5   Incentive        1        0       1
Input 6   Emotion          0        0       0
Output 1  CTR              .003     0       .003
Output 2  Attitude         2.8      0       2.8
Output 3  Recall           .14      0       .14
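As Table 3 shows, the "value if efficient" column is simply the actual value adjusted by the slack: outputs rise by their output slack and inputs fall by their input slack. A quick arithmetic check of the banner ad 11 output figures:

```python
# Table 3, banner ad 11: an inefficient ad reaches the frontier when each
# output increases by its slack (the inputs had no slack for this ad).
actual = {"CTR": 0.001, "attitude": 2.62, "recall": 0.27}
slack  = {"CTR": 0.002, "attitude": 0.32, "recall": 0.05}

value_if_efficient = {k: round(actual[k] + slack[k], 3) for k in actual}
print(value_if_efficient)  # {'CTR': 0.003, 'attitude': 2.94, 'recall': 0.32}
```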
After the respondents finished viewing the Web site, we directed them to an online instrument that asked them a series of questions, including aided recall of and attitude towards the banner advertisements they had just seen. We used the scale from Dreze and Hussherr (2003) to measure aided recall: the respondents were shown the advertisements and asked if they recalled seeing them (on a yes–no scale). To measure attitude towards the ad, we asked the respondents how they would rate the advertisement overall, on a one-to-five scale anchored by good and bad; such a scale has been used by Stevenson et al. (2000). We then estimated the average recall rate and attitude towards the ad for each advertisement across all respondents.

4. Results

Of the 42 advertisements, data on 37 were complete. We performed two sets of DEA on the 37 banner advertisements. The first DEA used the CTRs provided by the ad serving firm and the six input variables: color, emotion, incentives, interactivity, animation, and message length. Of the 37 banner advertisements, 15 were efficient (1.0); efficiencies ranged from 0.43 to 1.0, with a mean of 0.77. The second DEA was performed on the same 37 advertisements, this time with recall and attitude towards the ad as additional output variables (and all six input variables). Of the 37 banner advertisements, we judged 24 to be efficient (1.0); efficiencies ranged from 0.50 to 1.0, with a mean of 0.89. Thus, as we evaluated the banner advertisements on three outputs rather than one, their efficiencies changed. Since a large number of advertisements were judged efficient, we calculated super efficiency for the efficient advertisements in the second DEA model, with six input variables and three output variables. The DEA results are presented in Table 1.
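The summary statistics reported above (number of efficient advertisements, range, and mean efficiency) follow directly from a vector of DEA scores; a sketch with a short hypothetical score vector, not the Table 1 data:

```python
scores = [1.0, 0.43, 0.77, 1.0, 0.56]       # hypothetical DEA efficiencies
efficient = [s for s in scores if s == 1.0]  # units on the frontier

print(len(efficient))                        # 2
print(min(scores), max(scores))              # 0.43 1.0
print(round(sum(scores) / len(scores), 2))   # 0.75
```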
In order to benchmark the DEA rankings, we then used some of the previous approaches (e.g., output-based, ratio-based, and regression-based) to rank the banner advertisements. Table 2 lists the rankings of the banner advertisements using various effectiveness measurements. The first three columns rank the banner advertisements on the basis of just one output (CTR, recall, or attitude towards the ad). In the next column we computed a ranking based on the ratio of one output to one input, namely CTR to message length. We created the regression rankings in the next column using CTR as the dependent variable and the six input variables as independent variables. The next column presents the rankings from DEA, in which we accommodated all six inputs and all three outputs to compute the relative efficiencies used to rank the banner advertisements. The last column presents rankings based on the super efficiencies calculated for the 24 efficient advertisements.

There can be significant differences in the rankings produced by the different methods. For example, banner advertisement 21 had a rank of 5 when we focused only on CTR, a rank of 5 when evaluated by the ratio of CTR to message length, and a rank of 4 when evaluated using CTR as a function of the six input variables in a regression. However, when we performed DEA, it had a rank of 31, and its efficiency was not impressive. That is, banner advertisement 21 used a significant amount of inputs and was not one of the most efficient of the 37 advertisements. The Spearman correlations between the rankings from these different methods were often quite low (ranging from .1 to .7), further indicating that each method gives different results with respect to the performance of these advertisements.

Super efficiency helps distinguish among the efficient advertisements. Banner advertisements 1 and 3 were both efficient (efficiency of 1.0).
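The Spearman correlations between rankings follow the rank-difference formula rho = 1 - 6 * sum(d^2) / (n(n^2 - 1)); a small sketch with hypothetical rank vectors, not the study's rankings:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two tie-free rankings of the same n items:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the rank difference."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical example: two methods that rank five ads almost identically.
print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))  # 0.8
```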
However, the super efficiencies of all 24 efficient advertisements show that banner advertisement 3 (rank 1 based on super efficiency) is much more efficient than banner advertisement 1 (rank 24). Thus, banner advertisement 3 was the most efficient of all the efficient advertisements.

To illustrate how inefficient advertisements could be made efficient, we performed sensitivity analysis on banner advertisement 11 (Table 3). Banner advertisement 11 had 3 colors and a message length of 6 words; it had no interactivity, animation, incentive, or emotion, and its efficiency was 0.79. DEA can provide diagnostic information about how this advertisement can improve its efficiency. The slacks in the sensitivity analysis show how banner advertisement 11 can become efficient: to achieve an efficiency of 1.0, its CTR would have to rise from 0.001 to 0.003, its attitude score from 2.62 to 2.94, and its recall score from .27 to .32. This information has implications for the ad designer and the media planner: attitude and recall scores may be improved by better design, and CTR by better placement of the banner ad.

While banner ad 11 could become efficient by increasing outputs, sensitivity analysis shows that banner ad 30 can become efficient by decreasing inputs. Banner ad 30 has slacks in only 2 of its inputs. To become efficient (increase efficiency
from .95 to 1.0), banner ad 30 would have to use one fewer color and no animation. This example illustrates the strengths of DEA: it (1) provides a fuller picture by measuring all inputs and outputs, (2) provides a relative measure of efficiency by comparing the inputs and outputs of the focal banner advertisement with those of all other banner advertisements under consideration, (3) helps differentiate among the efficient banners by calculating super efficiency, and (4) provides diagnostic information for improving the efficiency of the ad.

5. Discussion and conclusion

The evaluation of a banner advertisement is a complex task because many different input and output factors affect advertisement performance. Some firms may prefer to evaluate the effectiveness of an advertisement on the basis of outputs only, and others may prefer to understand how certain input variables affect effectiveness as measured through output variables. Beyond effectiveness, firms may also want to focus on the efficiency of an advertisement relative to other advertisements, to ensure that the inputs they are using are optimal relative to the outputs being generated. We have presented a technique, based on DEA, that enables firms to evaluate the efficiency of their banner advertising. Because advertisers generally have multiple objectives for their advertising, and because such objectives can be reached using different combinations of inputs, DEA's ability to incorporate multiple inputs and outputs into one efficiency measure makes it a powerful tool. This approach should appeal to managers who want to ensure that certain output goals are met while they use the most optimal inputs. Creating advertisements with different features and characteristics is costly; it is therefore important to know which characteristics make an advertisement successful.
One area for future research is to examine the impact of the placement of an advertisement on the efficiency of banner advertisements. The advertisements in this study were placed on a network of sites; it would also be interesting to examine differences in the efficiency of advertisements placed on a network vs. a vertical of sites, or even individual sites.

A firm that decides to use DEA to examine the efficiency of its banner advertisements must decide which input and output variables to use. Our literature review illustrates the many types of input and output variables that can be used. In this research, we illustrate DEA on the basis of the inputs and outputs most frequently used in the literature to examine the effectiveness of banner advertising. We recognize that these specific variables may not be of interest to every firm. In this research, while we obtained CTR from the ad serving firm, the other output variables were obtained from a different sample, since the ad serving firm did not collect any other output measures. If future researchers would like to use actual CTR data
along with other output measures to evaluate the efficiency or effectiveness of their advertisements, they need to find ad serving firms that collect data on multiple output variables. Since only a few firms do this, researchers may have no choice but to use multiple approaches to collect data.

For some firms, DEA may be a fairly complex methodology and one that is difficult to communicate to employees. In such situations, firms should use it as a supplementary tool to measure advertising performance. Incorporating DEA may provide a fuller picture of the performance of banner advertisements by adding efficiency considerations to the more common approach of measuring effectiveness.

References

Bruner Rick E. The decade in online advertising: 1994–2004. http://www.doubleclick.com/us/knowledge_central/documents/RESEARCH/dc_decaderinonline_0504.pdf; 2005.
Chandon Jean L, Chtourou Mohamed S, Fortin David R. Effects of configuration and exposure levels on responses to Web advertisements. J Advert Res 2003;43(2):217–29.
Charnes A, Cooper WW, Rhodes E. Measuring the efficiency of decision making units. Eur J Oper Res 1978;2(6):429–44.
Charnes A, Cooper WW, Rhodes E, Golany B. A developmental study of data envelopment analysis in measuring the efficiency of maintenance units in the U.S. air forces. In: Thompson R, Thrall RM, editors. The annals of operations research, vol. 2. Norwell, MA: Kluwer; 1985. p. 96–112.
Donthu Naveen, Hershberger Edmund, Osmonbekov Talai. Benchmarking marketing productivity using data envelopment analysis. J Bus Res 2005;58(11):1474–82.
Donthu Naveen, Yoo Boonghee. Retail productivity assessment using data envelopment analysis. J Retail 1998;74(1):88–104.
Dreze Xavier, Hussherr F. Internet advertising: is anybody watching? J Interact Market 2003;17(4):8–24.
Interactive Advertising Bureau. Interactive audience measurement and advertising campaign reporting and audit guidelines. http://www.iab.net/standards/pdf/2292%20IAB%20spreads.pdf; 2004.
Lohtia Ritu, Donthu Naveen, Hershberger Edmund K. The impact of design elements on banner advertising click-through rates. J Advert Res 2003;43(4):410–8.
Luo Xueming, Donthu Naveen. Benchmarking advertising efficiency. J Advert Res 2001;41(6):7–18.
Pilling Bruce K, Donthu Naveen, Henson Steve. Accounting for the impact of territory characteristics on sales performance: relative efficiency as a measure of salesperson performance. J Pers Sell Sales Manage 1999;19(2):35–45.
PricewaterhouseCoopers. IAB Internet advertising revenue report. http://www.iab.com/resources/adrevenue/pdf/IAB_PwC_2004full.pdf; 2005a.
PricewaterhouseCoopers. Internet advertising revenues estimated to exceed $12.5 billion for full year 2005. http://www.iab.net/news/pr_2006_03_01.asp; 2005b.
Rust Roland T, Cooil Bruce. Reliability measures for qualitative data: theory and implications. J Mark Res 1994;31(1):1–14.
Seiford Lawrence M, Thrall Robert M. Recent developments in DEA. J Econom 1990;46(1/2):7–38.
Stevenson Julie S, Bruner II Gordon C, Kumar Anand. Webpage background and viewer attitudes. J Advert Res 2000;40(1/2):29–34.
Xue Mei, Harker Patrick T. Customer efficiency: concept and its impact on e-business management. J Serv Res 2002;4(4):253–67.