Making a case for speech analytics to improve customer service quality: Vision, implementation, and evaluation







Scott Scheidt (a), Q.B. Chung (b, *)

(a) AmerisourceBergen, 1300 Morris Drive, Chesterbrook, PA 19087, USA
(b) Villanova University, 800 Lancaster Avenue, Villanova, PA 19085, USA
(*) Corresponding author. E-mail address: [email protected] (Q.B. Chung).

Keywords: Customer service quality; Innovation; Speech analytics; Key performance indicators; Workforce management; Customer experience

Abstract

Firms operating in highly competitive markets must find ways to deliver customer value beyond offering competitive prices. Providing superior customer service in such environments becomes a strategic initiative because it can create a competitive advantage by fostering customer loyalty, which in turn helps ease pressure on profit margins and secure continued revenue flow. In this case study we report the use of speech analytics to improve customer service quality at a call center of a pharmaceutical supply chain service provider in the U.S. We first describe the strategic rationale for enhancing customer service quality, followed by the implementation of a quality management program using a relatively new approach, speech analytics. We then present a longitudinal study that evaluated customer service performance using data gathered from a team of 120 customer service agents over an 8-month period. Two categories of key performance indicators were established and measured, namely “workforce management” metrics and “customer experience” metrics, which served as the primary indicators in analyzing how well three strategically identified performance goals for improving customer service quality were attained.

1. Introduction

In this case study, we present a strategic initiative that examines the use of speech analytics to improve customer service quality at the call centers, or contact centers, of a company in the U.S. healthcare industry, Crossroads WellNet (pseudonym). This is a worthwhile endeavor at Crossroads WellNet because it attempts to determine whether speech analytics tools can be effective in improving the quality of customer service, which is believed to be a key differentiator and strategic lever for firms competing in highly competitive markets. In general, profit margins in these markets tend to be relatively slim, and firms must seek ways to deliver customer value beyond offering competitive prices. Customers in such markets routinely switch from one vendor to the next to gain small savings on the cost of goods. Facing this challenge, firms, including Crossroads WellNet, have recognized customer service as a source of competitive advantage. With the latest advances in computing power to capture large amounts of data and create insights using analytics, companies are attempting to improve their customer service quality by deploying customer interaction analytics software at contact centers, enabling them to leverage insights from customer interaction data in order to deliver superior customer service.



As will be discussed, building a new customer service model with sophisticated digital technology to solve customer service issues is a complicated matter that has significant impact on individual workers and the business as a whole.

This paper is organized in the following manner. We first describe the vision that served as the foundation of the endeavor, treating the improvement of customer service quality as a strategic imperative, particularly in the context of customer-facing call center operations. In the succeeding section, we provide a detailed description of how the Customer Service Quality Improvement Program was implemented using a relatively new technology solution called speech analytics. We then present the outcomes of the Program by describing the evaluation methodology as well as the analysis of the results. The remaining two sections are devoted to discussions of lessons learned and conclusions.

2. The vision: the imperatives of customer service

Given the competitive landscape and the strategic opportunity that exists in the sphere of customer service, firms are challenged with finding the most effective method to improve the quality of customer service in a way that not only improves operational efficiencies but also delivers value that distinguishes the firm in the marketplace. If done well, it will reduce cost to the firm while creating a customer experience that attracts and retains customers.






In the past decade, research on contact centers has intensified as a result of the growing proportion of the population they employ and the ways in which they are transforming customer service interactions (Armistead, Kiely, Hole, & Prescott, 2002). The literature supports the idea that contact centers are playing an increasingly strategic role within service industries, ranging from the handling of relatively simple customer information and direct sales to more sophisticated service transactions and customer service functions. Thus, the importance of customer care at contact centers has shifted from an operational tool to a strategic tool in terms of customer relationship management (Kantsperger & Kunz, 2005).


2.1. Strategic significance of contact centers

Providing quality customer service has become critical for service organizations (Lovelock & Wirtz, 2004), as intensified competition among industry rivals and growing consumer power have eroded traditional product-based and service-based differentiation, forcing firms to seek new, more durable forms of competitive advantage. Many business and IT leaders see customer service—sometimes referred to as the “customer experience”—as a sustainable source of competitive differentiation (Thompson & Sorofman, 2015). Provision of quality service can increase organizational sales (Cronin & Taylor, 1992) and profitability (Rogelberg, Barnes-Farrell, & Creamer, 1999). Therefore, contact centers play a strategic role in the market value chain of many organizations. The literature recognizes the strategic significance of call center operation in a variety of business contexts, such as continued provision of health services and telemedicine through knowledge preservation among agents (Farshchian, Vilarinho, & Mikalsen, 2017; Procter et al., 2016), optimizing operational efficiency by treating the operation as a sophisticated queuing system (Liang & Luh, 2015; Yu, Gong, & Tang, 2016), and the role of English as a language of offshore call center operation (Boussebaa, Sinha, & Gabriel, 2014). The call center environment can be viewed as a sociotechnical system in which a range of information and communication technologies is used both to drive the agents’ work pace and to monitor the agents’ work output in order to maximize efficiency (Belt, Richardson, & Webster, 1999). To varying degrees, contact centers need to balance the conflicting principles of standardization of processes versus customization of products (Frenkel, Tam, Korczynski, & Shire, 1998), as well as brevity of calls versus quality of customer service (Stuller, 1999).

2.2. Service quality at contact centers

Service quality can be defined as conformance to customer requirements in the delivery of a service. Since quality can be engineered into manufacturing processes using statistical quality control, progress in manufacturing quality control has evolved relatively rapidly (Garvin, 1983). However, the measurement of quality in service delivery has proven to be more difficult. Services tend to be performance oriented, making precise specification of a uniform quality more difficult to implement and measure (Kettinger & Lee, 1994). Service quality has been shown to yield significant benefits, such as profit level increases, cost savings, and increased market share (Parasuraman, Zeithaml, & Berry, 1985). Firms assign considerable significance to service quality, as evidenced by their use of service quality to strategically position themselves in the market (Brown & Swartz, 1989). Although the analysis of the correlation between service quality and post hoc decisions is limited, service quality has been shown to affect purchase intentions (Cronin & Taylor, 1992; Sullivan & Walstrom, 2001). In particular, Zeithaml et al. (1996) report a strong influence of service quality on customers’ behavioral intentions, which was measured as the willingness of a customer to remain with the current vendor.

Today, Customer Service Agents (CSAs) at contact centers are required to provide an unprecedented level of one-to-one customer service that calls for many associated “soft skills.” Most training programs deal with hard skills such as customer interaction efficiency, network interaction, system operations, and product knowledge. But it is equally important to train agents on soft skills such as customer empathy that may make the agent a competitive differentiator. Training CSAs on these soft skills can help a contact center think and act in a way that is more “customer centric” (Bordoloi, 2004).

Since call centers are taking on an increasingly strategic role in business, providing a critical link in the value chain that connects customers to the firm, the potential implications of this study of the case at Crossroads WellNet are significant and numerous. Theoretically, the study implies that businesses across different industries can use customer interaction analytics technology to improve and expand their value proposition and gain strategic advantage in the market in ways that have not been done before. This insight could prompt firms to search throughout their organizations and across different industries to uncover new opportunities to deliver previously untapped customer value. Since the focus of this study is on the use of technology to improve the quality of customer service, its practical implication is insight into how technology can help centralize and standardize customer service across multiple geographic locations and consistently deliver high quality customer service throughout the enterprise.

3. The implementation: improving call center operation with speech analytics

A Customer Service Quality Improvement Program (CSQIP, or Program, hereafter) is a strategic initiative at Crossroads WellNet that enables a call center to continuously improve the quality of customer interactions through technology, training programs, and individual coaching. The Program implements a proactive approach, using analytics to manage the customer experience and to continuously monitor and improve performance against key performance metrics.

3.1. Traditional vs. new implementation methods

The traditional method for customer service quality improvement involved monitoring a fixed number of random customer-agent interactions per period using evaluation forms, followed by face-to-face coaching sessions. With the advent of call recording and speech analytics, however, CSQIPs are increasingly able to monitor and analyze larger amounts as well as greater varieties of data. Contact center quality assessment teams and managers can now not only monitor but also mine data from a variety of customer interactions such as speech, email, chat, and social media. Such Programs use analytics functionality to help a relatively small group of managers improve the overall performance of a contact center involving hundreds of agents, and they can potentially provide a wide range of valuable business insights.


3.2. Roles and responsibilities

Today such Programs can be implemented in a variety of ways, assigning different roles and responsibilities to complete the core tasks of the Program. Typically, one team is responsible for operating the customer interaction analytics system, monitoring and scoring telephone calls and other customer interactions according to key quality performance metrics. Those results are evaluated on a regular basis and are used to provide customer service insights to the managers who are responsible for managing the performance of individual call center agents.



3.3. Monitoring performance

By using the interaction analytics tool, a multi-dimensional approach can be applied to categorizing calls according to customer interaction content and the specific characteristics of that interaction. The solution correlates interaction types with agent performance, enabling side-by-side comparisons of agents and groups, discovery of metric dependencies, and monitoring of performance over time. Interaction analytics enable evaluators to pinpoint the customer interactions that are not in alignment with quality metrics in order to gain valuable insight into what went wrong or what went right with a particular customer interaction. The results of the interaction analytics system are displayed in computer dashboards so that agents and managers can visually monitor, in real time, how well agents, groups, or departments are meeting their customer service quality goals (Nice Systems, n.d.).

3.4. Feedback and training

Using analytics to implement a CSQIP, call center managers can create individual coaching packages for CSAs within the system. Such a package consists of recorded calls that exhibit the optimal customer interaction and can be shared with an agent who needs to improve upon a particular performance metric, so that the agent can better understand what needs to be fixed, make adjustments, and improve performance on that metric. In addition to the tailored coaching packages, managers can compile internal communication packages to be released to a team of CSAs, containing new training materials or new regulatory requirements, so that CSAs can efficiently receive timely information in their dashboard and review it with minimal disruption to their workflow. As the Program continuously highlights new trends, managers create tailored one-on-one or small-group coaching opportunities to work with agents in person, discussing their performance with the help of actual call recordings and quality trend reports. The ability to continuously identify quality trends is also critical to maintaining strong training programs that are up to date and address critical customer service issues. Thus, CSQIPs can continuously update training materials and create enduring training programs that foster quality consistently across the network. Two core feedback mechanisms are described below.

• Coaching Packages: Each CSA receives a “coaching package” based on a selection of calls that were evaluated and shown to be well above or well below the quality threshold. A coaching package can include a recording of the call itself, the overall score of the call, the individual scores for each of the elements contained in the scorecard that was used to evaluate the call, and any comments the manager may wish to include. Typically, the manager’s comments will highlight what the CSA has done particularly well and/or anything the CSA may need to improve upon. This information can be provided via email or received directly through the CSA’s call analytics dashboard application on their workstation computer.

• In-Person Coaching: Managers must meet with CSAs regularly to provide critical feedback in person on what went well and what did not. The cadence of in-person coaching will differ across businesses, but in this case study, managers met with each CSA twice per month for 30 min at each visit. During coaching sessions, managers provide a summary of the quality performance results for the last period, listen to and review any calls that scored well above or below quality thresholds, identify areas for improvement with coaching advice to help achieve quality goals, and offer positive feedback for calls that were handled well.

3.5. IT architecture and functionality of speech analytics

The increasing use of speech interaction analytics by contact centers (e.g., Müller, Debortoli, Junglas, & vom Brocke, 2016; Shim, Koh, Fister, & Seo, 2016) is just one part of more complex service quality improvement and customer engagement initiatives that aim to analyze a variety of communication channels which, in addition to the telephone, also include email, chat, and social media. Strategy-minded firms have joined forces with research institutions and universities to harness the benefits of speech analytics (e.g., Kane, 2017). As organizations strive to evolve from contact centers that focus primarily on telephone interactions to customer engagement centers that cover all media channels, contact center managers will seek a clearer understanding of the content and context of each and every customer conversation, regardless of channel (Davies, 2015).

The IT configuration for an interaction analytics system involves taking call inflow from the public switched telephone network into the contact center, where the calls are recorded and stored with metadata. Contact center personnel can then access the recorded call data and perform analysis using speech analytics techniques. The analytics system typically offers a suite of web-based interaction analytics tools that extract, process, analyze, and visualize customer interaction data on a large scale. The system gathers this information through sophisticated data mining rules, analysis engines, call flow event analysis, and monitoring of agent screen activity (Nice Systems, n.d.). Audio mining and speech analytics systems, specifically, enable contact centers to analyze recorded calls and extract insights about customer interactions through key-phrase, phonetic, or transcription technologies. These systems can be used to manage customer service quality through a quality management program aimed at delivering improved customer satisfaction according to key quality metrics.
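To make the pipeline described above concrete, the following sketch shows one way a recorded interaction and its stored metadata could be represented and then categorized for analysis. The field names, rule set, and categories are illustrative assumptions made for this paper, not the schema or API of the vendor system used at Crossroads WellNet.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class CallRecord:
    """One recorded customer interaction plus the metadata stored with it."""
    call_id: str
    agent_id: str
    channel: str                      # e.g., "phone", "email", "chat"
    started_at: datetime
    ended_at: datetime
    queue_seconds: float              # time before an agent answered
    hold_seconds: float               # total time the caller spent on hold
    transcript: Optional[str] = None  # filled in once speech-to-text has run
    categories: list = field(default_factory=list)

    @property
    def duration_seconds(self) -> float:
        return (self.ended_at - self.started_at).total_seconds()

def assign_categories(record: CallRecord, rules: dict) -> None:
    """Very simple keyword rules standing in for the vendor's mining rules."""
    text = (record.transcript or "").lower()
    for category, keywords in rules.items():
        if any(kw in text for kw in keywords):
            record.categories.append(category)

# Usage sketch with invented data.
rules = {"late delivery": ["late", "delayed", "still waiting"],
         "product return": ["return", "send back", "rma"]}
call = CallRecord("C-1001", "A-042", "phone",
                  datetime(2014, 10, 6, 9, 15), datetime(2014, 10, 6, 9, 22),
                  queue_seconds=18.0, hold_seconds=45.0,
                  transcript="My order is delayed and I want to send back one item.")
assign_categories(call, rules)
print(call.duration_seconds, call.categories)
```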

3.5.1. What is speech analytics?

Speech analytics refers to automated methods of analyzing speech to gain greater insight into customer interactions and individual agent performance (Nice Systems, 2015). It has also been applied to non-business settings, such as collaborative group work among students, to gauge the quality of teamwork by predictive modeling with classification algorithms (Bassiou et al., 2016). Newer real-time speech analytics techniques can detect and classify the states of the speaker, including emotion, sentiment, cognition, health, and communication quality (Tsiartas et al., 2017). Speech analytics systems often include both qualitative and quantitative speech analysis. Qualitative measures include automatic speech recognition, where spoken words or phrases are identified to create categories or topics of discussion through key words, as well as identification of the emotional character of the speech being analyzed. Quantitative performance metrics typically include “Call Hold Time,” “Average Speed to Answer,” and “Length of Call.” Insights from speech analytics systems can then be used to classify calls, trigger alerts and workflows, and drive improvement of operational and CSA performance (Davies, 2014).
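A minimal sketch of the qualitative/quantitative split described above follows. The timing fields, marker words, and the naive negative-tone flag are invented for illustration; a production speech analytics engine derives far richer signals directly from the audio.

```python
from statistics import mean

# Each call: quantitative timings plus a transcript for the qualitative pass.
calls = [
    {"agent": "A-042", "answer_sec": 12, "hold_sec": 40, "talk_sec": 310,
     "transcript": "I am really frustrated, this invoice is wrong again."},
    {"agent": "A-042", "answer_sec": 25, "hold_sec": 0, "talk_sec": 180,
     "transcript": "Thanks, that solved my pricing question."},
]

NEGATIVE_MARKERS = {"frustrated", "angry", "upset", "unacceptable"}

def quantitative_metrics(batch):
    """The 'hard' numbers: speed to answer, hold time, call length."""
    return {
        "average_speed_to_answer_sec": mean(c["answer_sec"] for c in batch),
        "average_hold_time_sec": mean(c["hold_sec"] for c in batch),
        "average_call_length_sec": mean(c["talk_sec"] for c in batch),
    }

def qualitative_flags(call):
    """A naive sentiment cue: does the transcript contain negative markers?"""
    words = {w.strip(".,!?").lower() for w in call["transcript"].split()}
    return {"negative_tone": bool(words & NEGATIVE_MARKERS)}

print(quantitative_metrics(calls))
print([qualitative_flags(c) for c in calls])
```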

3.5.2. How does speech analytics work?

Speech analytics goes beyond identifying spoken words. It applies linguistic and semantic analysis to verbal conversations in order to understand the topics discussed, their context, and the sentiment of the speakers during the interaction (Nice Systems, 2015). There are three main recognized approaches to analyzing and understanding speech recordings within contact centers: phonetic, transcription, and phrase-matching. The phonetic approach converts audio into a sequence of sounds called phonemes, which are the basic units of communication in a language. The output is a phonetic index, a matrix of probabilities that each phoneme was spoken at each moment in a call. Phonetic engines can look for patterns of phonemes that do not make logical sense, such as a long string of vowels or consonants, and edit the probability index accordingly to improve its likely accuracy. The transcription-based approach, more formally known as large-vocabulary continuous speech recognition (LVCSR), converts the audio into the most statistically likely word sequence based on the combination of acoustic evidence and a relevant language model. The third technique, known as phrase-matching analysis, looks for longer phrases within the raw audio based on a prerecorded template of their frequency vectors; these vectors extract the key features from the audio using different techniques (Davies, 2013).

Each approach has advantages and disadvantages. With phonetic systems, the initial processing of the audio is very fast, which allows large volumes (tens of thousands) of calls to be processed each day from a single server. However, even with fine tuning, the accuracy of the final transcription rarely exceeds 60%. With the transcription approach, because of the added complexity associated with the triphone approach and language model alignment, the processing phase is considerably more processor-intensive, which can limit the processing speed and scalability of the deployment; the triphone approach significantly adds to the complexity but provides a more advanced view of the available phonetic data. The phrase-matching approach adds a third layer of complexity that further increases the processing burden, since a detailed understanding of the key phrases used in each language and industry is required. Here too, processing speed and scalability are affected, and it takes time and effort to compile the lists of phrases, although the vendors are gaining experience (Davies, 2013). Speech analytics technologies are becoming more accurate and scalable, and investment in these systems has accelerated rapidly in the last few years, although they have not yet become widely adopted. Organizational interest is now high due to the potential to uncover insights that can enhance operational efficiency, understand the customer experience, improve agent performance, and increase customer loyalty (Davies, 2014).
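The phrase-matching idea can be illustrated at the transcript level, even though commercial engines match templates against the audio itself. The categories and phrases below are hypothetical examples, not a vendor-supplied phrase library.

```python
import re
from collections import Counter

# Phrase template: each category maps to phrases we expect agents or
# customers to say. (Illustrative phrases only.)
PHRASE_TEMPLATE = {
    "greeting": ["thank you for calling", "how may i help you"],
    "empathy": ["i understand", "i'm sorry to hear that"],
    "closing": ["is there anything else", "thank you for your business"],
}

def match_phrases(transcript: str, template: dict) -> Counter:
    """Count how often each category's phrases occur in a call transcript."""
    text = transcript.lower()
    hits = Counter()
    for category, phrases in template.items():
        for phrase in phrases:
            hits[category] += len(re.findall(re.escape(phrase), text))
    return hits

transcript = ("Thank you for calling, how may I help you? ... "
              "I'm sorry to hear that, I understand. ... "
              "Is there anything else I can do for you today?")
print(match_phrases(transcript, PHRASE_TEMPLATE))
```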




3.6. Key factors to consider for implementation

Implementing a CSQIP at a call center using speech analytics is not an exact science. Successful deployment requires more than just the purchase and installation of an appropriate system. Implementation often requires comprehensive business process changes in areas such as change management, hiring, new hire onboarding, training, coaching, employee engagement, talent management, performance evaluation, and continuous improvement. Pilot projects have been found to be valuable in ascertaining the potential impact of adopting a CSQIP, testing the process changes, and evaluating the results of the new technology being applied. Presented below are brief descriptions of key factors in implementing a Program at Crossroads WellNet.

3.6.1. Interaction analytics system selection criteria and strategic fit

It is important to choose an interaction analytics system that will enable the business to achieve its current customer service objectives while also having sufficient scalability to accommodate future needs as the business grows with the changing needs of its customers. The new technology should enable the contact center to fulfill its customer service strategy not only more efficiently, but also in a way that allows the team to deliver customer service that is distinctly better than that of its competitors. Before selecting an interaction analytics system, a broad range of use cases should be created and reviewed along with the complementary features of the system to ensure a good strategic fit that will deliver strong business value. All of the different service packages on offer should also be reviewed carefully to ensure that adequate support and training will be available for the team to implement the system in the customer service model. Finally, all technical issues should be reviewed carefully to ensure proper integration with existing systems and data security protocols.

3.6.2. Balancing efficiency metrics and quality metrics

Key performance metrics, sometimes called “key performance indicators” or “KPIs”, are quantifiable and strategic measurements that reflect an organization’s critical success factors and serve as a guide for making strategic decisions (Brooks, 2005), particularly with regard to implementing a quality management program to improve customer service. Contact center managers monitor customer service agents’ performance with respect to quality by selecting a specific set of quality metrics that will deliver improved customer service. Performance metrics are generally divided into two categories. The first category contains quantitative measures pertaining to “workforce management.” These metrics generally measure how efficiently agents handle calls. The second category contains measures that focus on the quality of customer service, often referred to as “customer experience.” Workforce management metrics are used to manage agent shift schedules and the like, and tend to focus on measures such as “call volume”, “average handling time”, “average hold time”, and “average speed to answer.” This set of metrics contains quantitative measures of efficiency used to examine how quickly calls are being processed. Quality or customer experience metrics can include measures such as “appropriately closed call”, “offer empathy”, and “manage dead air”. This set of metrics focuses on qualitative aspects of the customer experience that tend to be more difficult to measure. Call quality management has become a key differentiator and an important area of contact center management strategy, so the challenge for management is to find the right balance between efficient call handling and high quality call handling in order to maintain a cost-effective contact center that also consistently delivers an optimal customer experience.
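The two metric families can be illustrated with a small aggregation sketch that rolls evaluated calls up to a per-agent dashboard. The field names and sample values are assumptions for illustration only.

```python
from collections import defaultdict
from statistics import mean

# One row per evaluated call: efficiency timings plus a quality score (0-100).
calls = [
    {"agent": "A-042", "handle_sec": 420, "hold_sec": 45, "answer_sec": 12, "quality": 93.5},
    {"agent": "A-042", "handle_sec": 300, "hold_sec": 0,  "answer_sec": 20, "quality": 88.0},
    {"agent": "B-007", "handle_sec": 510, "hold_sec": 90, "answer_sec": 35, "quality": 95.0},
]

def agent_dashboard(rows):
    """Group calls by agent and report the two families of KPIs side by side."""
    by_agent = defaultdict(list)
    for row in rows:
        by_agent[row["agent"]].append(row)
    dashboard = {}
    for agent, items in by_agent.items():
        dashboard[agent] = {
            # "Workforce management" metrics: how efficiently calls are handled.
            "avg_handle_time_sec": mean(r["handle_sec"] for r in items),
            "avg_hold_time_sec": mean(r["hold_sec"] for r in items),
            "avg_speed_to_answer_sec": mean(r["answer_sec"] for r in items),
            # "Customer experience" metric: average scorecard quality.
            "avg_quality_score_pct": mean(r["quality"] for r in items),
            "call_volume": len(items),
        }
    return dashboard

for agent, kpis in agent_dashboard(calls).items():
    print(agent, kpis)
```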


3.6.3. Incorporation of employee perspectives

Employee satisfaction is the most important predictor of customer satisfaction and of employee loyalty to the firm, and it is the primary mediator through which a customer-oriented management style reaches the customer (Kantsperger & Kunz, 2005). According to this research, employee satisfaction plays a key role in building intense customer relationships. Therefore, companies should consider the needs of their customers as well as their employees, and possibly initiate measures to foster employee satisfaction. Kantsperger and Kunz (2005) also note that a strict emphasis on quality measures can discourage employees. Although the value of quality programs is indisputable, it is recommended that management take into consideration the perspective of the employees, because a strict quality orientation could signal a low level of trust toward the employees, which in turn could decrease intrinsic motivation and employee satisfaction. Therefore, firms should invest in their employees and apply an employee-oriented management style that will increase employee loyalty, decrease turnover, and increase customer satisfaction.


3.6.4. Call recording consent

It is important that proper consent and data capture procedures are in place to protect the rights of all parties and maintain efficient operations. Before implementing a quality management program using recorded interactions, customers must be properly informed, and both contact center employees and customers must give proper consent to the use of the interaction data.

3.6.5. Call selection and agent performance evaluation

Call selection and agent performance evaluation should be done in a consistent manner across all contact center locations. Agent performance evaluation should be an objective process that maintains a balance between customer service quality improvement and efficient contact center operations. It is important that contact centers clearly define performance expectations for new hires versus experienced contact center agents. Evaluation tools such as a “scorecard” are very useful and must be carefully constructed to properly assign weight to each of the performance criteria and ensure that the criteria are properly aligned with contact center quality objectives. “Calibrations” are also effective tools for maintaining consistency; both are described below.






• Scorecard: A “scorecard” is a common tool used to score each call selected by the quality assurance team. The scorecard lists a number of weighted performance criteria against which the agent is graded for each call. The scorecard often contains customer service quality criteria and is often used to help manage levels of customer service during call handling. It should be noted that the scorecard cannot account for every scenario that may occur on a given call. The scores from the scorecard are an indicator of how well the required elements were met on that particular call. It is possible that a call receives a passing score but still has significant issues that need to be addressed, e.g., poor attitude or inappropriate language.

• Calibration: Periodic score calibration is a common practice used by contact centers to ensure that those who score calls do so in a consistent manner. The process for call scoring calibration is to select one call or a small sample of calls and have the group of evaluators score those calls. After the individual scoring is complete, the group displays the score(s) and discusses in detail why there were any differences in the scores. The goal is for the group to reach a fair and consistent interpretation of the criteria listed in the scorecard, and thereby consistency in the scoring process.
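A minimal sketch of how a weighted scorecard score and a calibration check might be computed is shown below. The criteria and weights are a small illustrative subset loosely modeled on the point weights in Table 1; the greater-than-90% passing threshold follows the Program described later in Section 4.

```python
from statistics import mean, pstdev

# Illustrative subset of weighted criteria (points loosely follow Table 1).
WEIGHTS = {
    "appropriately_greet": 3,
    "manage_dead_air": 6,
    "offer_empathy": 12,
    "appropriately_close": 2,
}

def scorecard_score(results: dict, weights: dict = WEIGHTS) -> float:
    """Percentage of available points earned; `results` maps criterion -> met?"""
    earned = sum(w for crit, w in weights.items() if results.get(crit, False))
    return 100.0 * earned / sum(weights.values())

def passed(score: float, threshold: float = 90.0) -> bool:
    return score > threshold

# Calibration: several evaluators score the same call; a small spread means
# the criteria are being interpreted consistently.
def calibration_spread(scores: list) -> dict:
    return {"mean": mean(scores), "std_dev": pstdev(scores),
            "range": max(scores) - min(scores)}

call_result = {"appropriately_greet": True, "manage_dead_air": True,
               "offer_empathy": True, "appropriately_close": False}
s = scorecard_score(call_result)
print(round(s, 1), passed(s))                      # 91.3 True
print(calibration_spread([91.3, 87.0, 93.5]))
```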



a conversational flow during the call with only very brief silences when time is needed to complete a task – e.g., look something up, access a website, etc. Offer empathy: This quality metric requires that a CSA recognize or acknowledge the customer’s needs, emotions, or concerns, and express the CSA’s desire to assist the customer. This also requires that the CSA use proper inflection and a tone of voice that is consistent with the agent’s expression to provide assistance – e.g., pleasant, engaged, and interested in helping the customer; not tired, annoyed, disinterested, monotone, etc. Appropriately close the call: This quality metric requires that at the end of the call, the CSA ask the customer if there is there is anything else they can help them with, thank the customer, and identify the firm the CSA represents. In the case of a transfer, the CSA is also required to inform the customer of the need to transfer the call, state where they are being transferred and why, and provide the phone number for the entity or employee as a courtesy.

4. The evaluation: KPIs, goals, data collection, and results

Having implemented a CSQIP, Crossroads WellNet determined that an evaluation of the customer service performance of a team of CSAs in a call center would be beneficial. An interaction analytics system was used to collect call quality performance data from 120 CSAs across multiple contact center locations in the U.S. over an 8-month period. Each contact center was located on one floor covering a large workspace, with CSAs seated in low-profile workstation cubicles that included a laptop docking station with a large monitor and a telephone set/headset connected to a telecom switch integrated with the computer network call analytics system. The CSA supervisors were seated at the perimeter of the CSA cubicles. The call data captured in this study focused on evaluating the quality of customer service at Crossroads WellNet contact centers as calls were processed using a customer interaction analytics system. The contact centers processed approximately 4500 inbound calls, 800 emails, and 1200 outbound calls, totaling approximately 6500 customer interactions per day. Most of the inbound calls were concerned with product returns, claims, pricing, or item availability. Most of the CSAs were full-time employees, working 8.5-h shifts at the contact centers, which operated 7 am–11 pm, Monday through Friday. Regular breaks were taken for lunch and personal time.

4.1. Definitions of key quality performance metrics

As summarized in Table 1, 17 quality key performance indicators (or KPIs) were identified under five categories: (a) opening, (b) professional etiquette, (c) total issue resolution, (d) case management, and (e) closing. Each category comprised two or more KPIs, and each KPI was assigned a code as well as the points to be used in the scorecard. The KPIs that scored consistently above the 90% threshold for all agents are considered “mastered”. As noted in Table 1, eleven of the 17 KPIs had achieved the “mastered” level at the time of this report. The three KPIs that did not score consistently above the 90% threshold are called “challenging” KPIs, and the remaining three KPIs, noted as “achieving”, showed much progress although they were not considered fully mastered. The current study focuses on evaluating the three “challenging” KPIs, as defined and elaborated below.

• Manage dead air: This quality metric requires that a CSA maintain a conversational flow during the call with only very brief silences when time is needed to complete a task – e.g., look something up, access a website, etc. (A minimal timing-based sketch of how such silences might be flagged follows this list.)

• Offer empathy: This quality metric requires that a CSA recognize or acknowledge the customer’s needs, emotions, or concerns, and express the CSA’s desire to assist the customer. This also requires that the CSA use proper inflection and a tone of voice that is consistent with the agent’s expressed willingness to provide assistance – e.g., pleasant, engaged, and interested in helping the customer; not tired, annoyed, disinterested, or monotone.

• Appropriately close the call: This quality metric requires that at the end of the call, the CSA ask the customer if there is anything else they can help them with, thank the customer, and identify the firm the CSA represents. In the case of a transfer, the CSA is also required to inform the customer of the need to transfer the call, state where they are being transferred and why, and provide the phone number for the entity or employee as a courtesy.
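As referenced in the “manage dead air” definition above, a timing-based sketch of flagging long silences is shown below, assuming the analytics system can supply start and end times of speaker turns. The turn data and the 5-second threshold are illustrative assumptions, not parameters of the deployed system.

```python
# Speaker turns as (start_seconds, end_seconds) pairs for one call, e.g. taken
# from a diarized transcript. Gaps between consecutive turns are "dead air".
turns = [(0.0, 14.2), (15.0, 42.7), (51.9, 80.3), (81.0, 118.5)]

DEAD_AIR_THRESHOLD_SEC = 5.0  # silences shorter than this are acceptable pauses

def dead_air_events(speaker_turns, threshold=DEAD_AIR_THRESHOLD_SEC):
    """Return (gap_start, gap_length) for every silence longer than threshold."""
    events = []
    for (_, prev_end), (next_start, _) in zip(speaker_turns, speaker_turns[1:]):
        gap = next_start - prev_end
        if gap > threshold:
            events.append((prev_end, round(gap, 1)))
    return events

print(dead_air_events(turns))   # [(42.7, 9.2)] -> one 9.2-second silence
```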

4.2. Call evaluation scoring criteria and performance goals

As mentioned earlier, at Crossroads WellNet a “scorecard” approach was used to evaluate CSA performance on each call according to specific customer service quality performance metrics. Each quality metric was given a weight according to its importance with respect to delivering optimal customer service quality, noted as “points” in Table 1. A sample scorecard is shown as Fig. 1, which portrays only the key elements of an actual scorecard for illustration purposes; company-specific components have been redacted. Each scorecard evaluation yielded a numerical score for each call, expressed as a percentage of 100 possible points. A passing score was one that was greater than 90%. The quality performance goals for this team of CSAs were the following:

1. The average quality score taken from the monthly sample would be greater than 90%;
2. The percent of agents with a passing score would be greater than 90%; and
3. The average score for each “challenging” quality metric would be above 90%.

4.3. Sampling and data collection

The call quality data in this study were taken from a series of monthly quality reports produced over eight months by the Program’s speech analytics implementation. Each monthly report compiled quality data gathered from this team of 120 CSAs across different geographic locations. For Quality Performance Goals 1 and 2 identified above, a sample of 5 customer calls was selected from each CSA, and each of the 5 calls was scored using a scorecard containing quality metrics. The average score was calculated from the resulting 600 scores to create each of the data points that were charted for Goals 1 and 2. For the quality measures in Goal 3, a sample of 2 customer calls was selected from each CSA, and each of the 2 calls was scored using the same scorecard as for Goals 1 and 2. The average score was calculated from the resulting 240 scores to create each of the data points that were charted for Goal 3. The Customer Service Quality data were charted to produce the following graphs:


• Average Quality Score and Percent of Agents With Passing Score (Fig. 2)

• Average Score for Each of Three “Challenging” Quality Performance Metrics (Fig. 3)

The three “challenging” quality metrics displayed in Fig. 3 represent a small portion of the call quality metrics that were measured each month. These “challenging” quality metrics were chosen from the larger group of quality metrics because they were the few that did not score consistently above the 90% threshold.
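The monthly data points behind Goals 1 and 2 can be illustrated with a short aggregation sketch. It assumes, as a simplification, that an agent “passes” in a month when the mean of that agent’s sampled call scores exceeds the 90% threshold; the toy sample below uses three agents rather than the 120 in the study.

```python
from statistics import mean

def monthly_goal_metrics(scores_by_agent: dict, passing=90.0):
    """scores_by_agent maps agent id -> list of scorecard percentages
    for that month's sampled calls (5 per agent in this study)."""
    all_scores = [s for scores in scores_by_agent.values() for s in scores]
    agents_passing = [a for a, scores in scores_by_agent.items()
                      if mean(scores) > passing]
    return {
        "goal1_average_quality_score": round(mean(all_scores), 2),
        "goal2_percent_agents_passing": round(
            100.0 * len(agents_passing) / len(scores_by_agent), 2),
    }

# Toy month with three agents instead of 120.
sample = {
    "A-001": [92, 95, 91, 96, 93],
    "A-002": [88, 90, 85, 91, 89],
    "A-003": [97, 94, 96, 92, 95],
}
print(monthly_goal_metrics(sample))
```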



Table 1. List of 17 Quality KPIs in 5 Categories.

Category                   | Code | KPI                                                        | Points | Status
(A) Opening                | A01  | Appropriately greet the customer                           | 3      | Mastered
(A) Opening                | A02  | Verify account information                                 | 3      | Mastered
(A) Opening                | A03  | Take ownership of the call                                 | 8      | Mastered
(B) Professional Etiquette | B04  | Clear diction and pronunciation; proper rate and volume    | 2.5    | Mastered
(B) Professional Etiquette | B05  | Address customer appropriately                             | 2.5    | Mastered
(B) Professional Etiquette | B06  | Proper use of hold                                         | 6      | Achieving
(B) Professional Etiquette | B07  | Manage dead air                                            | 6      | Challenging
(B) Professional Etiquette | B08  | Offer empathy and/or acknowledgement as appropriate        | 12     | Challenging
(B) Professional Etiquette | B09  | Appropriate agent behavior                                 | 10     | Mastered
(C) Total Issue Resolution | C10  | Identify customer’s primary needs; ask probing questions   | 10     | Mastered
(C) Total Issue Resolution | C11  | Demonstrate knowledge, accurate/appropriate information    | 12     | Mastered
(C) Total Issue Resolution | C12  | Offer correct solution                                     | 10     | Mastered
(C) Total Issue Resolution | C13  | Verify customer’s understanding of solution/next step      | 7      | Mastered
(D) Case Management        | D14  | Case notes completed correctly                             | 3      | Achieving
(D) Case Management        | D15  | Case information completed as required                     | 2      | Mastered
(E) Closing                | E16  | Offer case number                                          | 1      | Achieving
(E) Closing                | E17  | Appropriately close the call                               | 2      | Challenging

Total points of quality KPIs: 100.
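For readers who prefer the table as data, the sketch below encodes the KPI codes, point weights, and status labels from Table 1 and verifies that the weights sum to the 100 available points.

```python
# (code, points, status) triples for the 17 KPIs in Table 1.
KPIS = [
    ("A01", 3, "Mastered"), ("A02", 3, "Mastered"), ("A03", 8, "Mastered"),
    ("B04", 2.5, "Mastered"), ("B05", 2.5, "Mastered"), ("B06", 6, "Achieving"),
    ("B07", 6, "Challenging"), ("B08", 12, "Challenging"), ("B09", 10, "Mastered"),
    ("C10", 10, "Mastered"), ("C11", 12, "Mastered"), ("C12", 10, "Mastered"),
    ("C13", 7, "Mastered"), ("D14", 3, "Achieving"), ("D15", 2, "Mastered"),
    ("E16", 1, "Achieving"), ("E17", 2, "Challenging"),
]

assert sum(points for _, points, _ in KPIS) == 100  # weights add up to 100 points

challenging = [code for code, _, status in KPIS if status == "Challenging"]
print(challenging)  # ['B07', 'B08', 'E17'] -> dead air, empathy, call closing
```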

Fig. 1. A sample scorecard.




Fig. 2. Average Quality Score & Percent of Agents with Passing Score (> 90%).

Fig. 3. Average Score for Three “Challenging” Quality Metrics.

4.4. Results

The results of this study focus on CSA performance with regard to the three quality performance goals described above; the CSA performance measures for each goal are evaluated below. Each goal is used to evaluate the success of the team in improving the quality performance metrics and serves as a basis for gauging the efficacy of the implementation of the CSQIP using speech analytics.

4.4.1. Goal 1: average quality score > 90% (Fig. 2)

The team of CSAs achieved this goal in 7 of 8 months. The data show a consistent upward trend for this quality performance metric, with the average score increasing from 88.03% to 94.86% over the 8-month period.

4.4.2. Goal 2: greater than 90% of CSAs achieve a passing score of above 90% (Fig. 2)

The team of CSAs achieved this goal in 1 of 8 months. The data show an overall upward trend for this quality performance metric, with the percent of agents receiving a passing score of > 90% increasing from 66.67% to 92.74% over the 8-month period.




4.4.3. Goal 3: average quality score > 90% for each of three “challenging” quality metrics (Fig. 3)

The three “challenging” quality metrics displayed in this graph represent a small portion of the call quality metrics that were measured each month. They are “manage dead air”, “offer empathy”, and “appropriately close the call.” These metrics represent “soft skills” that the management team wanted to improve using the interaction analytics system, and they were the few metrics that did not score consistently above the 90% threshold. The results of the agent evaluations for each of the three “challenging” quality metrics are summarized below.

1. Manage dead air: The team of CSAs achieved this goal in 0 of 8 months during this period. However, the data show a strong upward performance trend for this metric, with the average score increasing from 44.40% to 73.59% over the 8-month period.

2. Offer empathy: The team of CSAs achieved this goal in 1 of 8 months during this period. However, the data show a steady upward performance trend for this metric, with the average score increasing from 74.45% to 91.06% over the 8-month period.

3. Appropriately close the call: The team of CSAs achieved this goal in 4 of 8 months during this period. The data show a steady upward performance trend for this metric, with the average score increasing from 78.14% to 93.21% over the 8-month period.

Although the team of CSAs did not fully meet the three quality performance goals, the data show significant overall improvement. The CSA performance with respect to Goal 1 indicates that, overall, the team is able to maintain high quality scores very close to or above the 90% threshold. The CSA performance with respect to Goal 2 indicates a steady increase in the number of agents who were able to achieve a passing score (> 90%), with the percentage rising from 66.67% to 92.74%. When looking at Goals 1 and 2 together, the data suggest that the team of CSAs can be divided into two groups: one large core group of CSAs who consistently achieve passing quality scores (> 90%) and demonstrate that they have mastered the quality customer service metrics, and a smaller second group of CSAs who were unable to consistently achieve passing quality scores. One can infer that the strong performance of the large core group kept the average quality score consistently near or above the 90% threshold despite the lower scores from the smaller group of underperforming CSAs. The score decrease in October 2014 and November 2014 with regard to Goal 2 could be a result of external events such as bad weather, which could cause service issues in other parts of the organization, resulting in a spike in calls to the call center and affecting quality performance scores through the sheer volume of calls to be handled.

The CSA performance with respect to Goal 3 indicates that, overall, the team showed significant improvement in all three “challenging” quality metrics. The team of CSAs showed the ability to appropriately close the call over 78% of the time, with steady improvement toward achieving the goal of 90%. The team was able to offer empathy to customers over 74% of the time, with steady improvement toward the goal of 90%, and achieved the goal in the final month of the study. The team was most challenged by managing dead air, with scores ranging between 44% and 73%, but showed the strongest rate of improvement over the 8-month period toward achieving the goal of 90%.

5. Lessons learned: implications, limitations, and further research

The above evaluation supports the original hypothesis that a CSQIP using a speech analytics system can be an effective strategic initiative to improve the quality of customer service at call centers, as evidenced by the significant improvement in CSA performance with regard to the quality metrics that were used to evaluate progress toward achieving customer service goals.

5.1. Key benefits and challenges

Speech analytics systems offer significant benefits for improving customer service quality metrics. From an operations perspective, interaction analytics enable efficient call monitoring, sampling, and reporting in order to centralize and standardize the delivery of high quality customer service across multiple contact center locations. Interaction analytics systems enable contact center managers to identify specific CSA performance behaviors and trends that can be used to create tailored coaching experiences for individual agents to improve specific performance metrics, as well as to continuously update general departmental training for new hires so that it reflects current customer and CSA needs. In addition, interaction analytics systems can characterize customer interactions and analyze them in real time, enabling call centers to be more proactive about quality service provision. In doing so, the system can further categorize customer interactions according to content and characteristics – e.g., key word, length of interaction, reference to emails, tone of speech, etc. It can then associate them with a set of predefined performance metrics and allow managers to monitor, in real time, how well CSAs, groups, or departments are meeting their goals. The benefits of this functionality to managers are numerous. For example, as incoming calls enter the “late delivery” category, they can be further analyzed as a data set to find the root cause (e.g., traffic, weather, etc.), and outgoing calls can then be made to inform customers of the service issue and the resulting late deliveries that will be experienced that day.

Using speech analytics to improve customer service quality also poses some key challenges. From an operational perspective, one of the main challenges is that a significant number of changes will likely need to be made to current call center processes in order to implement the interaction analytics system and enable it to run efficiently. From a technical perspective, it is a challenge to get all contact center personnel properly trained to master the use of the technology in order to gain its full benefit. Multiple training sessions need to be prepared depending on user needs, and the training sessions themselves pose scheduling issues, as CSAs can only be taken off calls for short periods of time in order to maintain service quality standards. Also from a technical standpoint, the process of defining customer interaction categories according to their content can be challenging. For example, using a keyword such as “upset” to categorize a call as “dissatisfied customer” is not a straightforward configuration task, because customers may use words other than “upset” to express their dissatisfaction, and expressions may vary widely depending on the region. As a result, creating a taxonomy to effectively categorize call center interactions requires additional time to refine the set of keywords as well as to adjust the linguistic and semantic rules that are applied to customer interactions, so that the system correctly and reliably categorizes incoming calls for analysis in real time. From a cultural perspective, it can be a challenge for contact center personnel to get used to the increased surveillance and call monitoring requirements that are often part of speech analytics systems, which is discussed in more detail in the next section.
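The taxonomy-refinement challenge described above can be sketched as a configurable keyword map, so that regional or informal variants of an expression such as “upset” can be added without changing code. The categories and trigger phrases below are invented for illustration.

```python
# A configurable taxonomy: each category lists the expressions that should
# trigger it, so regional or informal variants can be added without code changes.
TAXONOMY = {
    "dissatisfied customer": [
        "upset", "not happy", "fed up", "this is ridiculous", "disappointed",
    ],
    "late delivery": ["late", "hasn't arrived", "still waiting on"],
}

def categorize(transcript: str, taxonomy: dict = TAXONOMY) -> list:
    """Return every category whose trigger phrases appear in the transcript."""
    text = transcript.lower()
    return [cat for cat, triggers in taxonomy.items()
            if any(t in text for t in triggers)]

print(categorize("I'm fed up, my order still hasn't arrived."))
# ['dissatisfied customer', 'late delivery']
```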

5.2. Managerial implications

The managerial implications of this study are quite significant. The findings should help managers understand that interaction analytics systems are powerful tools that can be used to centralize and standardize the delivery of high quality customer service across multiple locations by leveraging speech analytics technology, whereby they can manage the development of specific performance behaviors in individual CSAs aimed at delivering high quality customer service. Although the power of interaction analytics has been noted, good management practice using this technology should also seek to ensure effective coaching (Martin, 2017), performance management (Gil, Iddo, & Dana, 2014; He, Wang, Zhu, & Harris, 2015), and hiring practices (Griggs et al., 2016). In fact, there is a rich discussion of the critical role played by supervisor-CSA dynamics in call center environments, such as the positive impact of a supervisory support structure on recovery from a service failure (Oentoro, Popaitoon, & Kongchan, 2016), as in the case of other service-oriented industries (Guchait, Paşamehmetoğlu, & Dawson, 2014). By having this innovative technological support and traditional managerial acumen in place, managers will be able to avoid the pitfalls of inadequate training, ineffective coaching, and inadequate performance evaluation, as well as hiring CSAs who are not well suited to learning the system. By avoiding these pitfalls, managers will be better positioned to achieve their customer service goals more quickly.

While the advantages of using interaction analytics are many, managers should proceed with caution in this area. Interaction analytics tools are merely one means of improving customer service quality, and therefore it is recommended that managers consider additional methods and incentives to develop high quality service behaviors at contact centers. It has been shown that management can err by excessively scripting and monitoring CSA behavior (Bain, Watson, Mulvey, Taylor, & Gall, 2002; Batt, 2002; Holman, Chissick, & Totterdell, 2002; Holtgrewe, Kerst, & Shire, 2001; Houlihan, 2002; Rafaeli, Ziklik, & Doucet, 2008; Tansik & Smith, 2000), and, as indicated earlier, too much focus on quality metrics themselves can hurt morale. Managers are therefore encouraged to find creative ways to balance customer satisfaction with employee satisfaction (Kantsperger & Kunz, 2005) in order to build quality interactions with both customers and employees.



5.3. Limitations and suggested further research

Since we had a relatively narrow scope of data, limited to one industry over an 8-month period, the generalizability of our findings beyond this particular context may not be widely warranted. However, the quality performance metrics identified in this study appear to be relevant to other types of services provided in other industries. Ultimately, it is important to continue this line of research by examining quality performance metrics in a wide array of service contexts as the technology of speech analytics matures. Future research may identify additional quality performance metrics that are relevant in other service contexts and study their impact on customer service in those new contexts. Further, it remains to be proven that achieving internal quality goals through interaction analytics translates into increased customer loyalty, increased customer satisfaction, and reduced revenue losses due to customer dissatisfaction. Additional research in these areas should be done to better assess the impact of CSQIPs on the business. Customer surveys or correlation studies evaluating the connection between reaching quality goals and realizing improvements in customer satisfaction and customer loyalty would provide valuable insight. There is a need for research on additional variables such as customer inputs (e.g., education, culture, age, buying behavior, etc.) that may directly or indirectly influence the customer’s evaluation of quality performance and satisfaction. Also, from an internal perspective, future research may evaluate the impact of the use of interaction analytics tools at call centers on employee job satisfaction and attrition. There is also a need for research on correlations between outside events (e.g., weather incidents, system malfunctions, etc.) and quality scores in order to factor out how unplanned events affect customer satisfaction. Such a study would show how different events affect specific quality metrics and their impact on customer satisfaction, from which proactive strategies could be devised for managing quality during unplanned events.

Further research could also explore the effects of quality performance metrics on different dimensions of customer satisfaction. Certain types of quality performance metrics may be more relevant to particular aspects of customer satisfaction, and this relationship may also depend on the type or context of service. For example, quality performance metrics such as “accuracy of information” might be more important than “offers empathy” in different service industries (e.g., investment banking). Future studies could also examine the relative importance of different quality performance metrics across industries.


6. Concluding remarks

This case study evaluated the efficacy of a Customer Service Quality Improvement Program that used speech analytics tools to improve customer service quality at a call center of Crossroads WellNet, a pharmaceutical supply chain service provider in the U.S. The significance of this study is that, following the claim that such a Program has strategic implications, we report an implementation of a CSQIP utilizing the relatively new technology of speech analytics and evaluate the efficacy of the Program as a tool for improving the quality of customer service, which can be a key differentiator that provides strategic advantage for firms competing in highly competitive and concentrated markets. The results presented in this study demonstrate that measuring and analyzing KPIs with speech analytics can be an effective way to improve the delivery of quality customer service and enable the organization to standardize and centralize the delivery of high quality customer service across multiple locations in the enterprise.

References

5.3. Limitations and suggested further research Armistead, C., Kiely, J., Hole, L., & Prescott, J. (2002). An exploration of managerial issues in contact centers. Managing Service Quality, 12(4), 246–256. Bain, P., Watson, A., Mulvey, G., Taylor, P., & Gall, G. (2002). Taylorism, targets and the pursuit of quantity and quality by call center management. New Technology, Work and Employment, 17(3), 170–185. Bassiou, N., Tsiartas, A., Smith, J., Bratt, H., Richey, C., Shriberg, E., et al. (2016). Privacy-preserving speech analytics for automatic assessment of student collaboration. 17th annual conference of the international speech communication association, proceedings: 2016, (pp. 888–892). Batt, R. (2002). Managing customer services: Human resource practices, quit rates and sales growth. Academy of Management Journal, 45(3), 587–599. Belt, V., Richardson, R., & Webster, J. (1999). Smiling down the phone: Women’s work in telephone contact centers. Paper presented at the centre for economic performance call centre conference. Bordoloi, S. K. (2004). Agent recruitment planning in knowledge-intensive contact centers. Journal of Service Research, 6(4), 309–323. Boussebaa, M., Sinha, S., & Gabriel, Y. (2014). Englishization in offshore call centers: A postcolonial perspective. Journal of International Business Studies, 45(9), 1152–1169. Brooks, M. (2005). Defining and measuring KPIs and metrics. Business Intelligence Journal, 10(3), 44–50. Brown, S. W., & Swartz, T. A. (1989). A gap analysis of professional service quality. Journal of Marketing, 53(2), 92–98. Cronin, I. I., & Taylor, S. A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56(3), 55–68. Davies, J. (2013). How to select the best contact center speech analytics technology. Gartner Retrieved from www.gartner.com. Davies, J. (2014). Market guide for contact center speech analytics. Gartner Retrieved from www.gartner.com. Davies, J. (2015). Are you ready for real-time speech analytics? Gartner Retrieved from www.gartner.com. Farshchian, B. A., Vilarinho, T., & Mikalsen, M. (2017). From episodes to continuity of care: A study of a call center for supporting independent living. Computer Supported Cooperative Work, 26(3), 309–343. Frenkel, S. J., Tam, M., Korczynski, M., & Shire, K. (1998). Beyond bureaucracy? Work in contact centers. International Journal of Human Resource Management, 9(6), 957–979. Garvin, D. A. (1983). Quality on the line. Harvard Business Review, 61(5), 65–75. Gil, L., Iddo, G., & Dana, Y. (2014). Spending more time with the customer: Service providers’ behavioral discretion and call-center operations. Service Business, 9, 427–443. Griggs, T., Eby, L., Maupin, C., Conley, K., Williamson, R., Griek, O., et al. (2016). Who are these workers, anyway? Industrial and Organizational Psychology, 9(1), 114–121. Guchait, P., Paşamehmetoğlu, A., & Dawson, A. (2014). Perceived supervisor and coworker support for error management: Impact on perceived psychological safety and service recovery performance. International Journal of Hospitality Management, 41, 28–37.




He, H., Wang, W., Zhu, W., & Harris, L. (2015). Service workers’ job performance: The roles of personality traits, organizational identification, and customer orientation. European Journal of Marketing, 49(11–12), 1751–1776.
Holman, D., Chissick, C., & Totterdell, P. (2002). The effects of performance monitoring on emotional labor and well-being in call centers. Motivation and Emotion, 26(1), 57–81.
Holtgrewe, U., Kerst, C., & Shire, K. (2001). Re-organizing service work: Call centers in Germany and Britain. Chippenham, UK: Ashgate.
Houlihan, M. (2002). Tensions and variations in management strategies in call centers. Human Resource Management Journal, 12(4), 67–85.
Kane, G. C. (2017). MetLife centers its strategy on digital transformation. MIT Sloan Management Review, 59(1).
Kantsperger, R., & Kunz, W. H. (2005). Managing overall service quality in customer care centers. International Journal of Service Industry Management, 16(2), 135–151.
Kettinger, W. J., & Lee, C. C. (1994). Perceived service quality and user satisfaction with the information services function. Decision Sciences, 25(5–6), 737–766.
Liang, C.-C., & Luh, H. (2015). Solving two-dimensional Markov chain model for call centers. Industrial Management & Data Systems, 115(5), 901–922.
Lovelock, C., & Wirtz, J. (2004). Services marketing: People, technology and strategy. Upper Saddle River, NJ: Prentice Hall.
Martin, J. (2017). Ethical communication in a retail banking call center sales position. Journal of Internet Banking and Commerce, 22(7), 1–8.
Müller, O., Debortoli, S., Junglas, I., & vom Brocke, J. (2016). Using text analytics to derive customer service management benefits from unstructured data. MIS Quarterly Executive, 15(4), 243–258.
Nice Systems (2015). NICE Speech Analytics. [Whitepaper]. Nice Systems.
Nice Systems (n.d.). NICE Quality Optimization. [Brochure]. Nice Systems. Retrieved from https://www.nice.com/optimizing-customer-engagements/Lists/Brochures/nice_quality_optimization_-_brochure.pdf (Accessed 31 October 2017).
Oentoro, W., Popaitoon, P., & Kongchan, A. (2016). Perceived supervisory support and service recovery performance: The moderating role of personality traits. Asia-Pacific Journal of Business Administration, 8(3), 298–316.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41–50.
Procter, R., Wherton, J., Greenhalgh, T., Sugarhood, P., Rouncefield, M., & Hinder, S. (2016). Telecare call centre work and ageing in place. Computer Supported Cooperative Work, 25(1), 79–105.
Rafaeli, A., Ziklik, L., & Doucet, L. (2008). The impact of contact center employees’ customer orientation behaviors on service quality. Journal of Service Research, 10(3), 239–255.
Rogelberg, S. G., Barnes-Farrell, J. L., & Creamer, V. (1999). Customer service behavior: The interaction of service predisposition and job characteristics. Journal of Business and Psychology, 13(3), 421–435.
Shim, J. P., Koh, J., Fister, S., & Seo, H. Y. (2016). Phonetic analytics technology and big data: Real-world cases. Communications of the ACM, 59(2), 84–90.
Stuller, J. (1999). Making contact center voices smile: A business case for better training. Training, 36(4), 26–31.
Sullivan, J. R., & Walstrom, K. A. (2001). Consumer perspectives on service quality of electronic commerce web sites. Journal of Computer Information Systems, 41(3), 8–14.
Tansik, D. A., & Smith, W. L. (2000). Scripting the service encounter. In J. A. Fitzsimmons, & M. J. Fitzsimmons (Eds.), New service development (pp. 239–263). Thousand Oaks, CA: Sage.
Thompson, E., & Sorofman, J. (2015). Customer experience is the new competitive battlefield. Gartner. Retrieved from www.gartner.com (Accessed 31 October 2017).
Tsiartas, A., Albright, C., Bassiou, N., Frandsen, M., Miller, I., Shriberg, E., et al. (2017). SenSay Analytics™: A real-time speaker-state platform. Proceedings of ICASSP 2017, March 2017.
Yu, M., Gong, J., & Tang, J. (2016). Optimal design of a multi-server queueing system with delay information. Industrial Management & Data Systems, 116(1), 147–169.
Zeithaml, V. A., Berry, L. L., & Parasuraman, A. (1996). The behavioral consequences of service quality. Journal of Marketing, 60(2), 31–36.

