Audience Measurement


James G Webster, Northwestern University, Evanston, IL, USA
© 2015 Elsevier Ltd. All rights reserved.

Abstract

Audience measurement entails regular assessments of the size and composition of media audiences. These are presented as syndicated reports used by media industries. Measurement services use questionnaires, diaries, meters, and/or servers to quantify people’s exposure to mass media. All methods of data collection are affected by error, including sampling, nonresponse, and response error, as well as error in the production of reports. Audience measurement is a specialized kind of market research that is deeply enmeshed in the media systems of most industrialized nations. These measures, which include broadcast audience ratings, play an important role in shaping popular culture.

Audience measurement, as the term is commonly used, refers to regular assessments of people’s media use. These data often result in estimates of the size and composition of audiences, like television audience ratings, that enable advertiser supported media. Audience measurement can be thought of as a specialized form of market research that is deeply enmeshed in the media systems of most industrialized nations (Anand and Peterson, 2000). It quantifies popularity, guides the allocation of billions of dollars in advertising worldwide and, in doing so, shapes the media environment and popular culture.

Uses of Audience Measurement

The major impetus for audience measurement is advertising. By the late nineteenth century, most newspapers and mass circulation magazines had begun to sell space to advertisers. The value of space was determined largely by the number of readers who would see each publication, and by extension each ad. Paid circulation served as a reasonable surrogate for the size of the readership. Many publishers, however, made inflated claims about their circulation, undermining the confidence of advertisers. To make print media credible, in 1914 the industry created the Audit Bureau of Circulations, an independent organization that verified circulation. To this day, such audits continue in the United States and many other countries around the world. The growth of radio in the 1920s presented new challenges. Unlike print, there were few traces of the radio audience, save for fan mail or the sheer number of receivers being sold. For radio to be a viable advertiser-supported medium, the industry needed to authenticate the size and composition of audiences. The problem was especially pressing in countries like the United States and Australia that relied on advertising to fund broadcasting (Balnaves et al., 2011). To provide reliable estimates of the number of people listening to specific programs, the industry sponsored audience surveys. By the 1930s independent research firms had begun to produce what were called ‘ratings’ (i.e., percentages of the population tuned to a particular program or station). These made the audience visible and allowed advertisers to buy spot announcements in radio with a degree of certainty. When television and other forms of electronic media emerged later in the century, similar

International Encyclopedia of the Social & Behavioral Sciences, 2nd edition, Volume 2

mechanisms for measuring the size and composition of audiences were quickly put in place (Beville, 1988). By the 1980s, as advertiser support became more widespread, most European countries put similar systems in place (Gunter, 2000). Today, advertising is a multibillion dollar business. Audiences are bought and sold like commodities. In many countries, these transactions are made possible by a dominant firm that supplies measurements for a particular medium. Advertising expenditures are typically guided by the cost of reaching various audience segments. In a business world increasingly interested in target marketing, research firms have been called upon to produce ever finer demographic distinctions, as well as data on lifestyles, levels of engagement, and product purchases. Curiously, audience ratings describe something that has already happened, while most advertising expenditures are made in anticipation of audiences that have yet to materialize. This odd system works because, in the aggregate, media audiences are quite predictable. For example, advertisers and networks know the total number of people who are likely to watch television at any time during the year, and negotiate ad prices on that basis. The stability of mass behavior has also allowed researchers to identify various ‘law-like’ features of audience formation (e.g., Goodhardt et al., 1987; Webster and Phalen, 1997). Web use appears to follow similar law-like patterns (e.g., Easley and Kleinberg, 2010), although advertisers often pay after the fact as a function of audience impressions or ‘click-throughs’ (Turow, 2011). The fact that advertising is a major source of revenue for most forms of media (e.g., print, radio, television, the Internet, etc.) has embedded audience measurement in the operation of these industries. Obviously the system places a premium on audiences that will be attractive to advertisers, either by virtue of their sheer size or other desirable characteristics. 
Content, therefore, is evaluated with an eye toward its audience-making potential. Moreover, broadcasters pay careful attention to how audiences ‘flow’ across an entire schedule of programming, relying on structural factors to deliver people to their program offerings. Web publishers exploit hyperlinks and recommendation systems in a similar way. In all of these endeavors, audience measurement is needed to manage and sell the audience. In fact, a good case can be made that audience measurement is a necessary condition for the emergence of any form of advertiser supported media (Ettema and Whitney, 1994).
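The ratings arithmetic underlying this system is simple. A minimal sketch in Python, using hypothetical numbers: a ‘rating’ expresses a program’s audience as a percentage of the total population, while the related industry ‘share’ statistic (not discussed above, but a standard companion measure) uses only those currently using the medium as its base.

```python
def rating(program_audience: int, population: int) -> float:
    """Rating: percentage of the total population tuned to a program."""
    return 100 * program_audience / population

def share(program_audience: int, using_medium: int) -> float:
    """Share: percentage of those currently using the medium tuned to a program."""
    return 100 * program_audience / using_medium

# Hypothetical market: 1,000,000 people, 400,000 of them watching any TV,
# and 80,000 tuned to the program being measured.
r = rating(80_000, 1_000_000)   # 8.0 rating points
s = share(80_000, 400_000)      # 20.0 share
```

The distinction matters in practice: shares compare programs against their direct competition at the same hour, while ratings compare them against the whole population.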

http://dx.doi.org/10.1016/B978-0-08-097086-8.95009-8


Even in social systems not fully committed to advertising, audience measurement can become a fixture. Public service broadcasting around the world must often justify its existence by demonstrating that it has an audience (Ang, 1991; Gunter, 2000). Here the techniques of audience measurement are similar to those in fully commercial systems, although they may involve the use of more ‘qualitative’ measures that assess the appeal or usefulness of media materials (Gunter, 2000). Similarly, government officials concerned with good or bad media effects often relate these to people’s exposure to good or bad content (Webster and Phalen, 1997).

Measurement in the Digital Media Environment

Since the 1990s, there has been explosive growth in digital media. This has affected older media, like television, by greatly increasing the number of channels and ‘on-demand’ options available to viewers. It has also created new ways for users to find, view, download, and share media over digital networks. These developments pose a challenge to traditional methods of measurement by fragmenting audiences and enabling anywhere, anytime media consumption. Prior to the arrival of digital media, virtually all forms of audience measurement depended on drawing a sample of users to estimate the population. This is a ‘user-centric’ approach to measurement. As competition for audiences has grown, each new channel or Web site has claimed a piece of public attention. As a result, audiences, which used to concentrate on a few outlets, have been fragmented into smaller and smaller groups. This, coupled with the advertiser’s desire to reach more narrowly defined audiences, has taxed the ability of samples to keep pace and increased the likelihood of sampling error – a topic addressed below. Digital media have also enabled the growth of nonlinear technologies, like digital video recorders (DVRs), video-on-demand (VOD), and Web sites that allow users to stream or download media at will. Further, many of these are available on multiple platforms like computers, tablets, and smart phones, allowing users to fetch media anywhere at any time. As a result, it is no longer possible to rely exclusively on scheduling information to determine what media each person is exposed to. These challenges are mitigated, to some extent, by harnessing the servers that power digital media. They include the computers that (1) provide content to users, by delivering web pages, videos, or other media; (2) enable access through the Internet service providers; and (3) place display ads across multiple Web sites to deliver impressions.
In the process, servers can record each user action, which forms the basis for a new way to measure audiences. This is a ‘server-centric’ approach to measurement. Each approach, the user-centric method and the newer server-centric strategy, has strengths and weaknesses. Generally speaking, user-centric methods do a better job of assessing the traits of individual users (e.g., age, gender, etc.) and allowing measurement firms to track media consumption across platforms (e.g., print, broadcast, Web, mobile, etc.). However, since they use samples it is difficult to accurately estimate the audience for smaller outlets without undue sampling error.

Server-centric methods avoid the problem of too-small samples because they see the actions of everyone who uses the server. In theory they provide a census of media use, although in practice all members of the relevant population may not be visible to the server. More importantly, they cannot see who is visiting the server, unless visitors have provided that information. While that is sometimes the case, for example with ‘social media’ sites, it is often difficult to accurately describe the characteristics of the users. Moreover, servers cannot see beyond their associated digital networks, making it difficult to assess cross-platform media use. Measurement companies have devised solutions to many of these problems, though none is without its own drawbacks. Often, data are ‘fused’ to create a hybrid of user- and server-centric information in an attempt to capture the best of both worlds. Unfortunately, true ‘single source’ systems that measure all the desirable variables (i.e., user traits, product purchases, moment-to-moment multi-platform media use) across a very large sample or population are costly, can impose undue burdens on respondents, and are therefore rare.
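Fusion methodologies are proprietary and vary by firm. The following is only a toy illustration of one common idea, ascribing the demographics of the most behaviorally similar panel member to an otherwise anonymous server-side user; all data, field names, and the matching rule are hypothetical.

```python
def fuse(server_users: list[dict], panel: list[dict], keys: list[str]) -> list[dict]:
    """Toy data fusion: for each server-log user, find the panelist with the
    closest behavioral profile (L1 distance on shared variables) and copy
    that 'donor' panelist's demographics onto the user record."""
    fused = []
    for user in server_users:
        donor = min(panel, key=lambda p: sum(abs(p[k] - user[k]) for k in keys))
        fused.append({**user, "age": donor["age"], "gender": donor["gender"]})
    return fused

# Hypothetical data: behavior measured as weekly minutes of news and sports use.
panel = [
    {"news": 120, "sports": 10, "age": 61, "gender": "F"},
    {"news": 5,   "sports": 90, "age": 24, "gender": "M"},
]
server_users = [{"id": "a1", "news": 110, "sports": 15}]

result = fuse(server_users, panel, keys=["news", "sports"])
# result[0] inherits the demographics of the heavy-news panelist.
```

Real fusion systems match on many more variables and validate the ascribed traits against holdout data, but the basic logic of borrowing traits from statistically similar donors is the same.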

Methods of Audience Measurement

Methods of audience measurement are closely tied to the use of user-centric or server-centric approaches. The former, more traditional approach offers greater flexibility about who is studied and how they are measured. As a type of survey research, user-centric measurement confronts issues common to sampling and statistical inference. Its distinctiveness comes from the methods it uses to measure the behavior of audience members. One of three techniques is typically employed: questionnaires, diaries, and meters. Server-centric methods limit who can be studied and confine measurement to actions that can be tracked or inferred from server usage. The first regular surveys of radio audiences were conducted using telephone interviews. Respondents were called at home and asked either what they were currently listening to (i.e., telephone coincidental method) or what they had listened to in the recent past (i.e., telephone recall method). Both techniques are used today. In fact, telephone coincidentals were historically regarded as the most accurate way to measure broadcast audiences and served as the standard against which other methods were judged (Beville, 1988). The advent of radio audience measurement forced print media to go beyond circulation as the sole indication of readership. Because a single copy of a magazine or a newspaper might well be read by more than one person, circulation alone seemed to understate the audience for print. The industry eventually adopted survey research to better estimate readership. Personal interviews were conducted in which respondents were shown stripped-down copies of various magazines and asked if they recalled reading them. Alternatively, to economize on time and increase the number of publications that could be assessed, respondents were asked to sort cards containing magazine logos and indicate which of these were read. Today, more and more survey work is being done over the Internet.
Respondents, who are either recruited or volunteer, fill out questionnaires online. This method provides some flexibility in questionnaire design, allowing researchers to


introduce visual materials or scripted clarifications. Using the Internet to administer questionnaires can offer fast turnaround times and be relatively inexpensive. However, since these techniques are limited to those who have Internet access and are willing to cooperate, making inferences to larger populations must be done with caution. Diaries are small booklets in which respondents keep a log of their radio or television usage throughout the day. Typically, diaries are kept for a week, though other durations are possible. Diaries are a relatively inexpensive data collection technique and, properly filled out, contain a wealth of information. However, many people are reluctant to accept a diary, or are unwilling or unable to make accurate entries. In a world of push-button radios, multi-channel television, and remote control devices, completing a diary can be a burdensome task. Diaries tend to overrepresent the use of popular outlets or programs that come easily to mind and underrepresent the number of sources that a diary-keeper actually uses. Nonetheless, diaries are a common measurement technique for producing local radio and television audience reports (Webster et al., 2014). Many of the problems associated with diaries can be solved by the use of meters. These devices attach to a receiver and automatically record the source to which the set is tuned. By the early 1940s, Arthur Nielsen had developed such a device and had placed it in a panel of American households to produce national radio network ratings. Household meters were eventually adapted to the new medium of television. While household meters provide a relatively accurate measure of set usage, they are expensive and offer no information about who is actually using the set. With advertisers increasingly concerned about demographics, the latter is a serious shortcoming.
To provide information about viewers, meters were adapted so that people could identify themselves by pressing a button. These ‘peoplemeters’ were put into service in the United States and most European countries in the 1980s (Webster et al., 2014). Newer peoplemeters are capable of detecting what content is being viewed, eliminating reliance on program schedule information. They do this by detecting an electronic code embedded in the signal, sometimes called a ‘watermark,’ or by actively comparing the digital signature of the signal to a library of digital media. Today, peoplemeters are the preferred method of television audience measurement in much of the world. Conventional peoplemeters are attached to television sets, and are therefore unable to measure out-of-home media use (e.g., radio) or cross-platform use. To remedy that problem, measurement companies also use portable peoplemeters (PPMs). These are either built into pager-like devices or wristwatches that respondents wear throughout the day. They ‘listen’ for an inaudible code (i.e., a watermark) in the audio portion of a broadcast signal, and credit listenership accordingly. PPMs are then docked at night to be recharged and cleared of the information they contain. This technology is used to measure radio audiences in larger markets. It can also capture other forms of media use (e.g., TV viewing) and so holds promise for studying cross-platform behaviors. With more and more media use migrating to the Web, measurement firms also meter personal computers. Respondents are asked to load measurement software on their own computers, which records the various pages that the user visits


and reports that to the research company. These data are aggregated to describe the number and type of people who visit the Web sites. They also support analyses of how audiences flow from one site to another. Like other exercises in metering, this is a user-centric approach that depends upon the cooperation of sample members. As noted above, a powerful new way to measure Internet use is to monitor servers. These computers see and record the actions of all those using the server. Learning something about the identity of the visitor is the biggest challenge in server-centric measurement. A variety of techniques are used. Servers will note the IP addresses of the visitors. That may reveal something about their location. Identities can sometimes be inferred by the kinds of materials a user accesses. Most importantly, servers routinely place ‘cookies’ on the browsers of visitors. These are small computer files that contain a record of what a user did on the site, and serve to identify them as repeat customers. Cookies can also be used to track users across Web sites and target advertising. These practices raise questions about user privacy and are the subject of regulation that varies from country to country. Servers also manage the traffic on most broadband distribution systems. Some firms aggregate data from Internet service providers to estimate Web site traffic. Cable and satellite systems often stream digital video signals into subscribing households, where they are managed by ‘set-top-boxes’ (STBs) attached to TV sets. When properly programmed, STBs can record channel usage. They become the functional equivalent of household meters, except that they are present in millions of homes. The census-like scope of the sample eases the problem of measuring fragmented audiences. When STBs are used to estimate television audiences the data must be carefully weighted to account for the fact that many homes and television sets are not attached to these digital networks.
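The weighting step just described can be sketched as simple cell weighting: each stratum of metered STB households is projected up to known population totals. Actual weighting schemes used by measurement firms are considerably more elaborate, and all the strata and figures below are hypothetical.

```python
def cell_weights(population_counts: dict, stb_counts: dict) -> dict:
    """Weight each STB household so that weighted totals match known
    population counts within each stratum (simple cell weighting)."""
    return {cell: population_counts[cell] / stb_counts[cell]
            for cell in stb_counts}

def projected_audience(tuned_homes: dict, weights: dict) -> float:
    """Project the tuned STB homes in each stratum to a population estimate."""
    return sum(weights[cell] * n for cell, n in tuned_homes.items())

# Hypothetical market, stratified by household type.
population = {"cable": 500_000, "satellite": 300_000}   # all TV homes
stb_sample = {"cable": 50_000,  "satellite": 20_000}    # metered STB homes
w = cell_weights(population, stb_sample)                # cable 10.0, satellite 15.0

tuned = {"cable": 4_000, "satellite": 1_000}            # STB homes tuned to a channel
estimate = projected_audience(tuned, w)                 # 55,000 homes estimated
```

Note what the weighting cannot fix: homes with no STB at all are represented only by assumption, which is why such data are usually fused with panel measurements before being sold.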

Sources of Error in Audience Measurement

There are four sources of error in audience measurement: sampling, nonresponse, response, and production. The first three are, again, problems commonly associated with survey research. The last includes a variety of issues that have to do with bringing a saleable research product to market. Most syndicated audience measurement is based on some form of probability sampling. Abundant digital media, and the ensuing audience fragmentation, make sampling error a persistent problem. In a world where some media subsist on a fraction of 1% of the total audience, even a sample with tens of thousands of respondents quickly becomes too small to accurately estimate the audience for smaller outlets or unpopular products. The error may be random, but it is unacceptably high relative to the estimate. Unfortunately, the cost of some measurement devices (e.g., meters) and the diminishing marginal returns associated with increases in sample size have forced difficult compromises on sample size and allowable levels of sampling error. Often, unpopular offerings are simply ‘below minimum standards’ for reporting. Servers that monitor millions of users can help provide the needed granularity, but with the limitations noted above. There are also potential problems of response. Written forms of feedback (e.g., self-administered questionnaires,


diaries, etc.) often suffer from low rates of response. Further, as people become increasingly leery of telemarketers, telephone survey completion rates are threatened. All these lead to concerns about nonresponse (i.e., the people providing data are systematically different from those who don’t). Even among those who agree to cooperate, there may be problems with the quality of their responses (e.g., response error). Diaries are particularly prone to response errors, as the flood of modern media simply overwhelms the reporting capabilities of even the most conscientious respondent. These errors can be nonrandom. For example, because diaries systematically favor popular media, they can produce reliable results of questionable validity. The accuracy of self-reported media use has become a contentious issue in some academic circles (e.g., Dilliplane et al., 2013; Prior, 2013). The information collected by interviews, diaries, meters, and servers can be thought of as raw material that must undergo a production process to become a saleable product. Data must be coded and checked for obvious errors. Missing values are occasionally imputed using a process called ascription. New information must be added. For example, the characteristics of server users might be inferred by fusing them with other data sets. Responses from underrepresented or overrepresented groups are often assigned a mathematical weight in a process called sample balancing. Finally, data must be aggregated and projected to create population estimates. Such production processes can introduce errors. Sometimes it is human error; sometimes it is the application of a less-than-perfect mathematical correction to the data (Webster et al., 2014).
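The sampling-error problem discussed above is easy to quantify. For a simple random sample, the standard error of an audience proportion p estimated from n respondents is sqrt(p(1 - p)/n). The sketch below, with illustrative numbers only, shows why even a large panel struggles with sub-1% ratings.

```python
import math

def rating_standard_error(p: float, n: int) -> float:
    """Standard error of an audience proportion p from a simple random sample of n."""
    return math.sqrt(p * (1 - p) / n)

def relative_error(p: float, n: int) -> float:
    """Standard error expressed as a fraction of the estimate itself."""
    return rating_standard_error(p, n) / p

# A 20% rating from a 10,000-person panel is measured quite precisely:
big = relative_error(0.20, 10_000)     # 0.02, i.e., about 2% relative error

# The same panel measuring a 0.5% rating gives a far noisier estimate:
small = relative_error(0.005, 10_000)  # roughly 0.14, about 14% relative error
```

The absolute error shrinks with p, but the error relative to the estimate grows, which is exactly the sense in which fragmentation makes samples "too small."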

Institutional Aspects of Audience Measurement

In most developed countries, audience measurement is a business that sells research reports or data to multiple subscribers. Sometimes, as is the case in the United States, firms compete openly for clients. In many countries, however, the consumers of audience measurement create a ‘joint industry committee’ (JIC) that specifies what measurement services are required and awards a multiyear contract to a sole supplier. Under either system, it is common to see one company become the dominant supplier of the ‘currency’ within a medium. As such, their reports take on the air of quasi-public, official documents. Though various errors in the data mean the numbers are not nearly as ‘hard’ as they appear, the organizations that use them have little alternative but to treat them as such. And an enormous amount rides on those numbers. The process of allocating billions of dollars in advertising and programming resources is informed, some would say tyrannized, by audience measurement. This affects the kinds of media that are available and, more generally, how entire media systems operate (Ang, 1991; Napoli, 2011; Turow, 2011). The institutional significance of audience measurement has not gone unnoticed. In the 1960s the United States Congress held hearings into the quality and consequences of the American broadcast ratings industry. Public scrutiny induced the industry to create an ongoing council to audit and accredit ratings research methods, as well as to sponsor industry-wide methodological research (Beville, 1988). JICs are also a forum for reviewing research methods. For these reasons, and because

audience metrics are used by powerful institutions with competing interests, it is probably the case that no supplier of audience measurements can drift too far from truthful reporting without sowing the seeds of its own destruction. Even so, much about audience measurement is negotiable and left to industry consensus. The very definition of exposure to media has changed over time and from method to method. For example, in the US the currency for valuing television advertising used to be program ratings. Advertisers, however, were more interested in how many people saw their ads. So in 2007, the currency became commercial ratings plus 3 days of replays, called ‘C3s.’ Today servers capture more information, which can be parsed in so many ways that industry consensus about the definition and utility of metrics is a source of ongoing discussion. Social media, for example, allow research firms to track what people say about programs and products. These ‘social listening’ data can form the basis of metrics on ‘engagement,’ though just how they will be used remains to be seen. New metrics, once they are adopted, can alter the way media industries operate (Napoli, 2011). By its very nature, audience measurement is reductionist. Critics have rightly charged that it fails to capture the idiosyncratic, lived experiences of media users. Conventional forms of audience measurement tell us little about what sense people make of their media use and how that use affects them. Some also charge that measurement turns people into commodities by lumping them into categories where one person is the functional equivalent of all others in the group. These groups are then sold to advertisers at some ‘cost-per-thousand.’ At best, these practices are dehumanizing. At worst, it is argued that the entire enterprise manipulates and ultimately victimizes audiences (Ang, 1991; Turow, 2011). Alternatively, audience measurement can be seen as exercising a democratizing influence on popular culture.
Material that is successful in the marketplace proliferates, while content that is unpopular fades away. While this notion of cultural democracy is oversimplified, it has been argued that measurement empowers audiences by casting them in a form to which institutions can and do respond (Ettema and Whitney, 1994; Webster and Phalen, 1997). Newer social media, which allow people to express their likes and dislikes, share links, and recommend content, extend the impact of measurement. These activities are supported and rendered visible by ‘user information regimes,’ such as search engines and recommender systems, that aggregate user data. The metrics they produce affect both the production and consumption of media (Webster, 2014). One way or another, it can certainly be said that audience measurement helps shape popular culture.
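The ‘C3’ currency discussed earlier can be illustrated with a toy calculation: the commercial audience is credited with live impressions plus any playback occurring within 3 days of the telecast. All numbers here are hypothetical, and the actual computation performed by the measurement supplier involves minute-by-minute commercial schedules rather than a single total.

```python
from datetime import timedelta

def c3_audience(live_impressions: int, playback: list) -> int:
    """Toy C3-style audience: live impressions plus DVR/VOD playback
    occurring within 3 days of the original telecast.

    `playback` is a list of (delay, impressions) pairs, where delay is a
    timedelta measured from the original airing.
    """
    window = timedelta(days=3)
    return live_impressions + sum(n for delay, n in playback if delay <= window)

# Hypothetical commercial minute: 2.0M live viewers plus time-shifted viewing.
playback = [
    (timedelta(hours=12), 300_000),  # same-night DVR playback: counted
    (timedelta(days=2),   150_000),  # within 3 days: counted
    (timedelta(days=5),   400_000),  # outside the C3 window: excluded
]
total = c3_audience(2_000_000, playback)  # 2,450,000 credited viewers
```

The choice of a 3-day window was itself an industry negotiation, which is the article's larger point: the ‘currency’ is a convention, not a fact of nature.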

See also: Advertising: General; Audiences, Media; Mass Media, Political Economy of; Media Effects.

Bibliography

Anand, N., Peterson, Richard A., 2000. When market information constitutes fields: sensemaking of markets in the commercial music industry. Organization Science 11 (3), 270–284.
Ang, Ien, 1991. Desperately Seeking the Audience. Routledge, London.
Balnaves, Mark, O’Regan, Tom, Goldsmith, Ben, 2011. Rating the Audience: The Business of Media. Bloomsbury, London.
Beville, Hugh M., 1988. Audience Ratings: Radio, Television, Cable, rev. ed. Erlbaum, Hillsdale, NJ.
Dilliplane, Susanna, Goldman, Seth K., Mutz, Diana C., 2013. Televised exposure to politics: new measures for a fragmented media environment. American Journal of Political Science 57 (1), 236–248.
Easley, David, Kleinberg, Jon, 2010. Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press, Cambridge, UK.
Ettema, James S., Whitney, D. Charles, 1994. Audiencemaking: How the Media Create the Audience. Sage, Thousand Oaks, CA.
Goodhardt, Gerald J., Ehrenberg, Andrew S.C., Collins, Martin A., 1987. The Television Audience: Patterns of Viewing, second ed. Gower, Westmead, UK.
Gunter, Barrie, 2000. Media Research Methods: Measuring Audiences, Reactions and Impact. Sage, London.
Napoli, Philip M., 2011. Audience Evolution: New Technologies and the Transformation of Media Audiences. Columbia University Press, New York.
Prior, Markus, 2013. The challenge of measuring media exposure: reply to Dilliplane, Goldman, and Mutz. Political Communication 30 (4), 620–634.
Turow, Joseph, 2011. The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth. Yale University Press, New Haven, CT.
Webster, James G., 2014. The Marketplace of Attention: How Audiences Take Shape in a Digital Age. MIT Press, Cambridge, MA.
Webster, James G., Phalen, Patricia F., 1997. The Mass Audience: Rediscovering the Dominant Model. Erlbaum, Mahwah, NJ.
Webster, James G., Phalen, Patricia F., Lichty, Lawrence W., 2014. Ratings Analysis: Audience Measurement and Analytics, fourth ed. Routledge, New York.

Relevant Websites

http://www.gesis.org/en/services/data-collection/gesis-panel/
http://www.gfk.com/
http://www.kantarmedia.com/
http://www.lissdata.nl/lissdata/
http://mediaratingcouncil.org/
http://nielsen.com/us/en.html
http://www.nielsen.com/content/corporate/global/en.html
http://www.telecontrol.ch/typo3/