30 Learning from the real world—Creating relevant research designs
Jacqueline H. Beckley, The Understanding & Insight Group LLC, Denville, NJ, United States
On context, from two highly experienced members of the art community: "What is context to you?" "Answer: Context has a relationship with content. Time and community (of people) comes into play. Either way it starts with 'the story'"—from a personal discussion in November 2017 with William Burback (former education director at MoMA and the Museum of Fine Arts, Boston) and Steven Evans (former curator at DIA Beacon and executive director of FotoFest, Inc.).

What you will learn from this chapter:
1. How to incorporate context into your work so that it is meaningful and authentic for you (Observe the Real World)
2. A step-by-step approach to incorporate context into your research (Hypothesis Setting; Refinement of Research Design)
3. How to know if you have "context" incorporated into your research (Putting the Pieces Together)
Context anchored research (CAR) is any form of consumer/people research that considers the context of activity as a central defining component that drives inputs, outputs, and analysis of the study. In many ways, it is the polar opposite of traditional sensory research (panel booth, surveys, descriptive) where most of the actions are highly and tightly controlled to eliminate any variable except the one being studied. CAR depends on how the primary researcher views the world or a given situation. Knowing who you are as a researcher and recognizing the biases you bring to your work will allow your research to be more robust and will allow the planner to understand what design elements may be present or absent, given one's individual viewpoint (Beckley, Paredes, & Lopetcharat, 2012; Wallendorf & Brucks, 1993).

Why? Incorporating context into research plans means that multiple, meaningful actions may be occurring simultaneously. There are multiple contexts operating at any given time (Baker & Allen, 1968). Understanding how you, the researcher, are making sense of the situation will inform the decisions you make regarding the following questions: (a) what is the context?, (b) how will I incorporate it into my research?, and (c) how will this inform the analysis I make regarding the entire body of research (Shapin, 2010)? Failure to consider the impact you, as the individual researcher, bring to context research is risky, since the logical/intellectual trap you construct will be the boundary of any of your conclusions (Prasad, 2014). A very handy and useful resource is Wikipedia's list of cognitive biases and the Cognitive Bias Codex graphic (Fig. 30.1), created by J. Manoogian, which visualizes the complexity of bias.
Fig. 30.1 Graphic adaptation of a range of human biases.
The graphic considers four categories that lead to biasing (not enough meaning, too much information, what should be remembered, and the need to act fast) and links these four categories to an extensive list of effects and behaviors. Next, one has to acknowledge that any study of context will involve memory, the experience of today, and anticipation of the future (Radvansky & Zacks, 2014). As a result, all context research has a density that extends beyond the interval of the research and may layer one event on top of another. Fig. 30.2 demonstrates the continuum for one event. Imagine a cube in which numerous pasts/presents/futures could exist at the same time or at least relate to one another. Since the timeframe within context research can stretch from a time long ago to a time well into the future, one can get more reliable research results by understanding the framing of the research (Tversky & Kahneman, 1981). Framing serves to set boundaries that enable a conversation regarding a more specific time.

Fig. 30.2 Context research includes the past, the present, and the future.

Storytelling, as a human vehicle to communicate life experiences, has become a popular tool in many forms of research and is helpful in defining the origin of certain beliefs and values. It anchors those aspects of a personal story in a timeframe, thus enhancing the specificity of the context one is dealing with during CAR (Dahlstrom, 2014; Holt, 1995; Little & Froggett, 2009). We come full circle back to YOU when we consider storytelling and its reliance on listening to the storyteller, forming a connection to the story through that listening (and thus through your bias), and arriving at your understanding of the story being told (through your perspective, bias, and experience) (Wallendorf & Brucks, 1993). Fig. 30.3 illustrates the complexity within this simple idea.

Fig. 30.3 YOU are integral for reliable context research.

It becomes clear, given the criteria above, why few have seriously considered context in their traditional research initiatives. It can be messy and difficult for others to replicate, leaving some scientists concerned about its validity and reliability. Yet we will illustrate how it can be done in a reliable fashion, while embracing scientific principles.
30.1 Observe the real world
As scientists and researchers, many of us are fascinated by the world around us and are curious to understand what makes it, and everything in it, work. Yet, early in our training as people and later as researchers, few of us are coached to get out of our own heads and consider how others think and perceive the world. It just isn't the natural way of learning. This perspective is changing, and that is good. What we recommend is a series of exercises and approaches that help you become more certain that you are accounting for your own perspective while trying to see the world or a situation (context) through others' eyes, all the while maintaining good science-based research. In the end, the goal is to have CAR that has veracity, perspicacity, and objectivity.
Start with knowledge and understanding

In academic/scientific circles, researchers usually search peer-reviewed literature. When considering context research, this individual silo approach is highly limiting and enhances the biases of the individual scientist. We suggest three steps:
A. Conduct a robust Knowledge Mapping session (Moskowitz, Beckley, & Resurreccion, 2012).
B. Create a series of journeys or treks to validate/expand your knowledge of the fundamentals of your topic.
C. Develop an approximate behavioral model of what you believe is the situation/context, based on the knowledge and understanding session and your in-person/in-depth journeys/treks (Beckley et al., 2012; Moggridge, 2007; Morales, Amir, & Lee, 2017).

A. Conduct a robust Knowledge Mapping session
Today, the generation of data, whether from classic peer-reviewed sources or other forms of media such as books, newspapers, blogs, and all forms of social media, must be considered to do immersive context-based research. Why? The pace of all of these forms of data, and their impact on decision making and choice, can radically alter the consequences of any given situation. One can no longer rely solely on forms of knowledge from the past. That said, how is one to know what is true and what are merely imaginary facts (Cull, 2014)? The establishment of a process called Knowledge Mapping, which has a number of rigorous steps, is one approach that is both robust and fast, and it allows testing of many aspects of confirmation bias at the start of any CAR (Anderson, 1986; Moskowitz et al., 2012). For CAR, Knowledge Mapping goes far beyond mere knowledge categorization (Jacobs, 2017). The essential features of Knowledge Mapping can be found in Table 30.1.

Table 30.1 Knowledge map tools
1. Room with empty walls.
2. Rolls of white or brown butcher paper.
3. Post-it Notes. Each participant (so-called Knowledge Mapper) ideally has his or her own color note.
4. The knowledge.
5. Various writing instruments that allow individuals to pick the writing instrument that works best for them.
6. Tables on which to write.
7. Participants—at least two; ten to twelve people are better.
8. Three to four hours.
9. Clear definition of the question that must be answered and mapped.
10. Facilitation leadership that has experience with achieving collaborative knowledge sharing.

The 9 key points to consider for CAR with the Knowledge Mapping process are:
1. Include both explicit and tacit sources of data. For effective CAR, you need to know what is known in the area and what is unknown. If the CAR research is designed for validation purposes, what is believed to be known will influence how the entire evaluation scenario is created. If, in contrast, the research is being designed for discovery purposes, be familiar with unknowns or imaginary facts within the knowledge web that can be incorporated into the test plan, design, and ultimate analysis. Explicit (data that can be found in reports, academic articles, and anything else that explicitly reports information) and tacit (generally information that experts or others have in their thoughts and that may not be found written down or validated in a traditional way) sources of data enable researchers to have a broader vision of which factors to include in or dismiss from design planning.
2. Bring in the data that is broadly considered relevant and data that might be questioned as valuable. A broad, diverse collection of data will allow the group to review and make a more holistic judgment regarding what knowledge should be considered (Cull, 2014; Logan, 2009; Tomasello, 2014). Why? Because if you, as the facilitator of the session, cherry-pick information, part of your bias will begin to infiltrate the thinking process. Additionally, consider the data itself and not the "freshness" of the data—some companies feel that data older than 5 years at the point of knowledge mapping is "too old". The tendency to want to delete or forget important aspects of knowledge should be avoided (Dalio, 2017).
3. Schedule enough participants to read through the knowledge in a meaningful period of time—generally one-half to three-quarters of a day for a given topic. Why? This is about the amount of time that most companies will allocate today for a process like this. While this amount of time may seem narrow, it is far better to take this thorough yet limited time than to go with one's own opinion or hunch and biased assumptions regarding the knowledge that already exists. The amount of data to be reviewed typically guides the number of participants—if there are several hundred documents, of which one-third might be popular media rather than research reports, 8–10 participants are adequate; more than 18 participants shifts the job of managing the session from pure knowledge work to both knowledge and group organization (more skills required of the organizers).
4. Read from a physical document rather than a "soft" (electronic) copy. Have the participants write summary "sound bites" on Post-it notes rather than use electronic tools. While this approach may be considered "old fashioned", a body of knowledge is building to suggest it provides much better cognitive outcomes for the research team (Mueller & Oppenheimer, 2014). Why? As the research has shown, the physical act of reading and writing has a potent impact on what is remembered, at least for a period of time, and this will yield more successful vetting of the data when it is reviewed by the participants.
5. Select a broad cross-functional group of participants for the knowledge mapping, to allow for the best discussion of the "facts" and non-facts. Why? A broad group of thinkers who are not like-minded will help identify the likely areas of confirmation bias that any research like this needs to avoid (Sloman & Fernbach, 2017).
6. Visualize the data in categories that are created by the data itself and not via pre-existing categories (Tufte, 1983). Why? A priori categorization is a perfect way to get what you have always gotten and to not consider the impact of context on your research. While you may opt to adjust the categories following thorough vetting and a strategic review of the data, don't fall into the trap of developing categories BEFORE the session and then not seeing how the data presented demonstrates what is really being considered in the current category of research (Moskowitz et al., 2012).
7. Discuss the data by category. A preferred method is to have one member of the group read the "sound bites" aloud and to have the rest of the group listen and then discuss the topic fully, with verbatim note taking being done to codify the discussion. Participants who are part of the vetting need to be part of the listening and group discussion.
Why? Open discussion regarding specific data that has been heard by the entire group allows a clarity of understanding as to what the meaning of the data is and how varied specific information can be "heard" by multiple individuals. This allows participants to know what is really understood, known, and agreed upon by the group, and also to have dialog around what isn't clear, is essentially unknown, and is not founded in consensus. These knowledge gaps become the areas that can be pursued by research. The capability to design research that enables greater consensus is heightened (Beckley, Herzog, & Foley, 2017; Sivasubramaniam, Liebowitz, & Lackman, 2012).
8. Once the full vetting of the knowledge has been completed, immediately turn the results into knowns and unknowns. Prioritize both the knowns and unknowns, and turn the unknowns into the strategic research activities needed to advance understanding. Ideally, the research plans progress rather quickly following the knowledge mapping session. What is being done during this phase is a blending of the knowledge work with thinking that can be more design-like or business-oriented. Rylander (2009) discusses the fine dance between these disciplines, where problem solving needs to blend "rational, analytical, and intellectual approaches with interpretive, emergent, and explicitly embodied approaches". Why? Moving the activity of mapping and the engagement of dialog and discussion to a rapid outcome allows for a well-developed learning plan that has the benefit of many stakeholders. If done directly following the discussion, these stakeholders are more likely to have a broader base of common knowledge, and therefore shared goals, as compared to individual knowledge that lacks the common link (Aumann, 1976).
9. If time allows, it is acceptable to have individuals who have not been part of the activity in point 8 review the findings and how the vetting and outcomes were achieved. Why? Other stakeholders who are familiar with the process but have not been part of the knowledge sharing can easily see the gaps that can occur during any group process (group think) (Kahneman, 2011), and can therefore strengthen the conclusions/plans that are presented, further reinforcing the entire knowledge mapping session and its outcomes of shared knowledge (Moskowitz et al., 2012; Rylander, 2009).
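Before moving to the journeys and treks, a minimal sketch of how the outcome of points 8 and 9 might be captured is shown below. The data structure, the example sound bites, and the priority values are illustrative assumptions; the chapter prescribes the workshop process itself, not any particular tooling.

```python
# Hypothetical bookkeeping for Knowledge Mapping output (point 8):
# each vetted "sound bite" is marked as known or unknown and given a priority.
sound_bites = [
    {"bite": "Weekday breakfasts are eaten in under ten minutes", "status": "known", "priority": 2},
    {"bite": "Whether the product is used differently when guests are present", "status": "unknown", "priority": 1},
    {"bite": "How often the package is opened with one hand", "status": "unknown", "priority": 3},
]

# Unknowns, ordered by priority, become the strategic research activities to plan next.
research_activities = sorted(
    (b for b in sound_bites if b["status"] == "unknown"),
    key=lambda b: b["priority"],
)
for activity in research_activities:
    print(activity["priority"], activity["bite"])
```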
B. Create a series of journeys or treks to validate/expand/refute your knowledge of the fundamentals of your topic

Through the Knowledge Mapping session, it becomes very clear where there are learning opportunities that will inform CAR. While large-scale quantification of knowledge gaps may be needed, a very successful approach coming out of the knowledge session is to plan a number of in-depth exercises to fully experience the contexts and situations that may be relevant to the research designs. These initiatives are generally not pilot studies, since they need to be less formed than a pilot; rather, they are a well-planned set of observational exercises that will anchor the researchers in the assumed context against which they will develop research. Today these interventions may be called: (a) experiences, (b) treks, (c) cultural anthropology, (d) marketing ethnographies, (e) field observations, or many other versions of exploratory field engagements. There are a number of classic books that can provide readers with well-accepted approaches (Table 30.2).
Table 30.2 Journeys, treks, approaches to observe—A few books that give you direction
101 Design Methods—Vijay Kumar
Designing Interactions—Bill Moggridge
Design & Emotion Moves—Pieter Desmet et al.
Emotional Design—Donald Norman
Ethnography for Marketers—Hy Mariampolski
IDEO Method Cards—IDEO
The Art of Innovation—Tom Kelley
The Experience Economy, 2nd edition—B. J. Pine II & J. Gilmore
The Observational Research Handbook—Bill Abrams
Qualitative Research Methods Series—Sage Publications, multiple volumes (authors include Russell Belk, Grant McCracken, G. Psathas, C. K. Riessman)
Importantly for students of context, truly understanding what the context for the research subject matter is, and assuring the chief researchers that they "get it" (that they have the capacity to step outside of their bias and expertise to get "snapshots" of what reality for their context is), will ultimately make the difference between breakthrough research findings and findings that merely validate previous research or pre-existing frameworks of thinking. Often, for business purposes, these small-scale research initiatives may be bundled together to create a depth of understanding that is acceptable to the group participating in the Knowledge Mapping and project planning, thus allowing avoidance of the original, fuller research plan. What was thought to be needed isn't anymore, and therefore one realizes a saving of effort, time, and money. For any of these experiences, some fundamentals must be followed:

- See. See through the eyes of the consumer whose context you wish to measure. Lay your biases aside and be fully open to seeing through their eyes, not yours.
- Listen. Listen outside of your biases so that you hear others' voices, not your voice validating itself.
- Hear. Make yourself willing and able to hear what you don't want to hear, what you haven't heard, and also those sounds and voices that assist you with your understanding.
- Feel. Try to use your personal empathy to understand the emotional elements of the context and those being experienced by potential subjects in those situations (Murphy, 2017).
There are several tools to facilitate seeing, listening, hearing, and feeling. Besides the creation of the model described below, simple observational tools (Kelley, 2001) can help anyone become a more mindful and aware researcher. For context research, this is essential. Considering the gaps in understanding through well-thought-out research practices prevents accepted paradigms from being carried unexamined into CAR. Thompson argues that through links to what is both said and unsaid, one can begin to understand how consumers (people) make sense of the world around them (Thompson, 1997).

C. Develop an approximate behavioral model to advance the understanding further
To be able to understand a subject in context, one needs to come at that context with theories that can be tested. In points A and B above, we introduced two very rapid and
easy ways to become informed about the subject around which CAR will be conducted. Creating a behavioral model during or directly following these two steps helps the research assumptions begin to be visualized. The visualization can then help the researcher understand whether they are creating a model that might credibly reflect real behavior, or whether there are issues in logic that will ultimately lead to flaws in the test design and in the conclusions about behavior. As George E. P. Box wrote, "All models are wrong, but some are useful" (Silver, 2012, p. 230). A few approaches are typical at this stage of model building:

- Adapt a model sourced during the Knowledge Mapping. Make sure you have the proper citations; taking someone else's best guess as a starting point is perfectly fine (Fig. 30.4).
- Adapt an existing model with modified learnings and insights found during Knowledge Mapping and journeys/treks (Goel, Johnson, Junglas, & Ives, 2010; Johnson-Laird, 1980).
- Create a communication-anchored model based on means-end chain principles (Gutman, 1982), in which the model embraces conversation and the underlying research/story. This communication-anchored model should indicate what is spoken or thought by individuals in relation to experiences and activities discussed during the story/interview. This conversation model should eventually move from a specific situation, through consequences of that situation, to values of a community or society (Silver, 2012).
Fig. 30.5 provides the reader with an approach to visualizing this conversation.

Fig. 30.4 Example of a draft behavioral model: a hypothesized path model in which consumer density and consumer choice drive perceived crowding and perceived control, which in turn drive pleasure and, ultimately, approach-avoidance; each latent construct is measured by one or two observed indicators (CRD1/CRD2, CTL1/CTL2, PLE1/PLE2, PREF).
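For teams that want to take a draft model like the one in Fig. 30.4 further, the sketch below writes a hypothesized structure of that kind in lavaan-style SEM syntax (the "=~" lines define how each latent construct is measured; the "~" lines define the hypothesized paths). The construct and indicator names follow the figure labels, but the specific paths shown are one plausible reading of the draft diagram and should be treated as illustrative assumptions; estimating such a model would require an SEM package (for example, lavaan in R) and data that, at the draft-model stage, you do not yet have.

```python
# Draft behavioral model in lavaan-style syntax (illustrative only; paths are a
# plausible reading of the draft diagram, not a fitted or published model).
DRAFT_MODEL = """
    # measurement model (latent construct =~ observed indicators)
    perceived_crowding =~ CRD1 + CRD2
    perceived_control  =~ CTL1 + CTL2
    pleasure           =~ PLE1 + PLE2
    approach_avoidance =~ PREF

    # structural model (hypothesized paths)
    perceived_crowding ~ consumer_density
    perceived_control  ~ consumer_density + consumer_choice
    pleasure           ~ perceived_crowding + perceived_control
    approach_avoidance ~ pleasure
"""
# At this stage the value is simply making the team's assumptions explicit and citable;
# with real data the description could later be handed to an SEM package for estimation.
print(DRAFT_MODEL)
```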
Fig. 30.5 Theoretical communication-anchored model for chocolate (value diagram). The diagram ladders attribute-level descriptors grouped under taste, texture, visual, and brand (e.g., creamy, smooth, bitter, sweet, dark, light brown, my brand, value) through consequences (e.g., melt in your mouth, rewarding, calms me down, guilt, feeling relaxed, in control, good for me) up to values (e.g., self validation, confidence, enjoyment, wellness, comfort, me time, indulgent, luxury, good to myself, connectivity), and highlights two routes through the map: a "taste = enjoyment" pathway and a "my brand, value" pathway.
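To make the structure of such a value diagram concrete, the sketch below encodes a few ladders from Fig. 30.5 as attribute-to-consequence-to-value links and walks them upward. The node names are taken from the figure; which node links to which is an illustrative assumption, not the published map.

```python
# Hypothetical means-end links (Gutman, 1982): attribute -> consequence -> value.
# Node names come from Fig. 30.5; the specific linkages are illustrative assumptions.
LINKS = {
    "creamy": ["melt in your mouth"],        # attribute -> consequence
    "smooth": ["melt in your mouth"],
    "melt in your mouth": ["indulgent"],     # consequence -> higher-level consequence
    "indulgent": ["good to myself"],         # consequence -> value
    "my brand": ["in control"],
    "in control": ["confidence"],
}

def chains(node, path=()):
    """Yield every ladder from a starting node up to a terminal (value-level) node."""
    path = path + (node,)
    next_nodes = LINKS.get(node, [])
    if not next_nodes:
        yield " -> ".join(path)
    for nxt in next_nodes:
        yield from chains(nxt, path)

for ladder in chains("creamy"):
    print(ladder)   # creamy -> melt in your mouth -> indulgent -> good to myself
```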
30.2 Hypothesis setting to guide the design process
Consider the traditional scientific method

Context anchored research (CAR) is complex. What is essential when we deal with this level of complexity is to implement certain fundamentals as perfectly as we can. To that end, make sure that there is a robust hypothesis setting step for CAR. We often see hypothesis setting taking place in academic research, but it may not necessarily occur in business-oriented settings. Routines and expectations of trained business professionals may lead to stepping away from this fundamental step of scientific research (or some business professionals may not understand the role hypothesis testing should play in planning any type of research). Creation of hypotheses is a key strategy when approaching CAR to create relevant research designs. Why? Since we researchers bring a bundle of biases into the research, it is important to take those biases and turn them into hypotheses that can be proved, disproved, or not observed, rather than having an ever-expanding list of questions whose purposes may be unknown, disguised, or unexplored for motive. It is helpful to incorporate a definition of what a hypothesis means:

- Definition: A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, a study designed to look at the relationship between sleep deprivation and test performance might have a hypothesis that states, "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."
- Unless you are creating a study that is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research, and why.
- Remember, a hypothesis does not have to be right. While the hypothesis predicts what the researchers expect to see, the goal of research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore a number of different factors to determine which ones might contribute to the ultimate outcome.
The following outlines the most successful approach to moving from the real world to relevant research designs (a minimal sketch of a hypothesis grid follows this list):

1) Start with a grid that encourages all members of the CAR team to create a hypothesis; see Fig. 30.6.
2) Independently, or as a team, create the hypotheses following the format shown in Fig. 30.6.
3) Start this process as early in the planning as possible, generally after the knowledge mapping but ahead of the journeys/treks, so that ideas from members of the research team can begin to be visualized and teased out into hypotheses that can be incorporated into the treks or the research planning.
4) Encourage as many hypotheses as possible. Once you have a working number (say 15), begin to sort them into logical categories.
5) Use the categorization process to identify areas that are too limited in hypotheses as well as those that may have redundant hypotheses, suggesting an opportunity to condense certain issues into a narrower set of hypotheses.
6) Using the hypotheses as a guide, identify the three to five overarching questions, implied by all of the grounded hypotheses, that must be answered through the CAR.
7) Use the hypotheses to create the interactions with people that will allow the CAR to work well. Rather than running a random or rote set of experiments or situations, the hypothesis-setting activity allows researchers to create the best context to answer the questions and to produce results that have objectivity and veracity.
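To make the grid concrete, the sketch below shows one way a team's hypotheses could be tracked from creation through the vetting statuses used at the debrief described later in this chapter (confirmed, disconfirmed, did not hear, not explored). The class, field names, and example hypotheses are illustrative assumptions rather than the chapter's template.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Hypothesis:
    """One row of a hypothesis grid (illustrative structure, not the chapter's form)."""
    author: str                       # team member who proposed it
    statement: str                    # specific, testable prediction
    category: str = "uncategorized"   # assigned during the sorting step
    status: str = "not explored"      # confirmed | disconfirmed | did not hear | not explored
    evidence: list = field(default_factory=list)  # observations noted at the debrief

# A few hypothetical entries for a CAR study
grid = [
    Hypothesis("lead researcher", "Participants skip the product when both hands are busy",
               category="usage context"),
    Hypothesis("lead observer", "Morning routines allow less than two minutes for this task",
               category="time pressure"),
]

def vetting_summary(hypotheses):
    """Tally statuses after the team debrief (see the data integration step later on)."""
    return Counter(h.status for h in hypotheses)

grid[0].status = "confirmed"
grid[0].evidence.append("observed in 6 of 8 home visits")
print(vetting_summary(grid))  # e.g. Counter({'confirmed': 1, 'not explored': 1})
```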
2B. Refinement of design

Discovery consists of seeing what everybody else has seen - and thinking what nobody has thought.
John Risk, Ford manager in charge of developing the Taurus
Fig. 30.6 Example grid to use for hypothesis creation.
A stepwise process can enable CAR, whether utilizing traditional or novel methods
Step One: Learning plan. Utilizing the hypotheses and questions, construct the series of research initiatives you need to address the identified needs. The learning plan at this point may be comprehensive or narrow depending on what has been identified. For the purposes of our chapter, we will assume that a comprehensive design is needed.

Step Two: Context to be researched. Identify the contexts that need to be tested and identify the best way to create those contexts. If there are budget or time constraints, or other situations that will inhibit the ability to create the best possible research design, it is important to acknowledge to yourself and your research community the trade-offs that are being made and how they may impact the veracity of the context. Avoid design modifications that violate the principles of the context or that fail to acknowledge what has been altered, since they may impact the results in ways that may or may not be possible to simulate. A home situation is not a central location situation. A shopping experience in a store is not an online purchase experience. A raft trip on the Colorado River is not an augmented reality raft experience. Therefore, it is important to the integrity of the research and the researchers to be very honest, at the early planning phase, about what the limitations of their context will enable or the barriers they will create.

Step Three: Research Subjects. Context research does not lend itself to convenience samples. Therefore, consideration of who the research subjects should be needs to be an early-stage planning consideration. Traditional recruitment approaches may need to be adapted to achieve the research goals (and the cost calculations for the project) (Charness, Gneezy, & Kuhn, 2013). Table 30.3 is a starting list of the factors that need to be considered for subjects involved in context research; a minimal screener-scoring sketch follows the table.

Table 30.3 A few participant recruitment considerations for CAR
- What are the key demographics that the business must have, and what is not really needed in this research format?
- Imagine yourself as the target in context—what are you thinking, what are you doing, how are you likely to answer questions? Choose the recruits accordingly.
- In addition to the standard demographics (age, gender, product usage, income, etc.) that the team must have, create a list of statements that a person can agree or disagree with—use both statements that your target can agree to, and some you believe they would likely disagree with. Develop a metric for the recruit.
- In the list above, include all aspects of the experience, which may include (for a purchase context) prepurchase thoughts, the actual shopping experience, use of the product, and how a person might or might not feel after shopping or using the product.
- Use strong language, not "tend to" or "usually", but rather phrases like "I always", to limit use of the center of the scale. You really want statements your target would clearly agree or disagree with.
- Determine the answer pattern that fits the target. For very passionate individuals, require that a large percentage of questions be answered within that pattern (for example, 7 out of 10 questions answered in the pattern you have chosen). Criteria can always be loosened, never tightened.
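As a worked illustration of the last two considerations in Table 30.3 (strongly worded agree/disagree statements and an answer-pattern criterion such as 7 out of 10), the sketch below scores a hypothetical recruit against a target answer pattern. The statements and the threshold are assumptions for illustration only.

```python
# Hypothetical screener statements paired with the answer the target recruit should give.
# A real screener would contain the full set (e.g., ten strongly worded statements).
SCREENER = [
    ("I always get ready while doing something else at the same time", "agree"),
    ("I never have uninterrupted time to myself in the morning", "agree"),
    ("I always do my makeup sitting down at a mirror at home", "disagree"),
]

def matches_target(answers, screener=SCREENER, required=None):
    """Return True if enough answers fall in the chosen pattern.

    The chapter's example criterion is 7 of 10; here `required` defaults to the full
    statement count. Per the chapter, criteria can be loosened later, never tightened.
    """
    required = len(screener) if required is None else required
    hits = sum(1 for statement, expected in screener if answers.get(statement) == expected)
    return hits >= required

# Example recruit's answers
recruit = {
    "I always get ready while doing something else at the same time": "agree",
    "I never have uninterrupted time to myself in the morning": "agree",
    "I always do my makeup sitting down at a mirror at home": "agree",
}
print(matches_target(recruit, required=2))  # True: 2 of the 3 example statements match
```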
Step Four: Research Team. Given the influence of biases and the complexity of context, the research needs a team. Having a range of individuals participating in the planning and execution will help reduce the influence of individual mindsets (Backmann, Hoegl, & Cordery, 2015). For the qualitative components, using a single individual rather than a small team influences how questions are presented and introduces forces that can drive outcomes in particular directions. While it is traditional to have a single moderator for qualitative/focus groups or for descriptive analysis, this approach, while economically reasonable, may not truly achieve context research that is perspicacious (Stewart, 1998). A context team of at least four individuals can be a starting point; however, 6–8 engaged participants are even better because of the multiple roles people with alternative mindsets can play (Mariampolski, 2006). The roles played by the individuals will be:

1. Lead researcher (primary investigator responsible for the research organization and team selection)
2. Lead facilitator (the person who will be responsible for the dialog/key questions with participants)
3. Lead observer (who may be the videographer/audio specialist; a critical role in making sure there is calibration between goals and outcomes)
4. Lead context integrator (this individual is responsible for tracking all output with as little bias as possible; they assist the team in making sure objectivity is being maintained)
Step Five: Timetable to get authentic results. Before considering which methods to use, it is necessary to consider how much time it will take for the desired context to evolve (for example, a product that needs to be tried over three weeks BEFORE any result should be considered visible). A thorough job of knowledge mapping and hypothesis development should allow the team to identify what conditions must be part of the research and therefore what the timeframe will need to be. For example, if one is using a product that requires people to think about a purchase, make the purchase, consider the consequences of the purchase, and perhaps compare one purchase to others, the timeframe for this context could cover multiple months. Alternatively, this could be condensed into a few days through clever design thinking, but the timetable needed to create the right context for the subjects involved has to be considered early in the planning process for CAR.

Step Six: Methodologies. This is the area that can be fun, creative, or horrifying to many researchers. How does one measure effects and manage all of the moving parts? Fun if you like challenges, creative if you enjoy matching experimental designs to needs, or horrifying if you want clear and specific methods with completely proven execution trajectories. The goal of the methods should be to ensure that the impact of context enables deeper connections with behaviors of use and choice than are facilitated through methods commonly used in consumer and sensory research (Johnson, 2012; Meyer, Crane, & Lee, 2016; Morales et al., 2017). Effective use of new technology continues to assist context research, from validated roles for photographs and video (Bateson & Hui, 1992) to the current growing area of virtual reality, and everything in between, including text and image analysis (Humphreys & Wang, 2017). A term that Charness has created is extra-laboratory experiments, in which the research is a bridge between traditional laboratory research and field research (Charness et al., 2013).
For Charness, an extra-laboratory study is conducted in the spirit of laboratory experiments but in a non-standard manner and can represent a range of experiment types. In her tutorial on interviews, Arsel expands this to suggest that the researcher needs to determine which approach among the traditions of inquiry should be selected (Arsel, 2017). The traditions that need to be considered include Biography, Phenomenology, Grounded Theory, Ethnography, and Case Study, and these are generally felt to be the foundations for context anchored research (Creswell, 1998). Rogers (1976) was an early advocate for having a systematic process, which is illustrated in Table 30.4. Table 30.4 presents a stepwise approach to what must be considered to create research that can deliver on context. Fourteen steps form a checklist that provides the foundation for anyone considering this form of research.

Table 30.4 Fourteen steps to consider for context anchored research (CAR)

Step One. Task: Get your knowledge of the category. Goal: Know what is known, know what is unknown, know what are imaginary facts; begin to understand what the barriers and opportunities are.

Step Two. Task: Observe with eyes and heart wide open. Goal: Use real-world exploration to inform the knowledge base of Step One with a mind that has current knowledge of the issues of the category.

Step Three. Task: Create a draft behavioral model. Goal: Help shape the hypotheses and design phase of the context anchored research (CAR).

Step Four. Task: Create team-anchored hypotheses. Goal: Create the problem-solving structure that will guide the type of context(s) that need to be designed and the stimuli and situations you need to evoke for the most reliable/actionable findings.

Step Five. Task: Develop a learning plan that supports the hypotheses. Goal: For CAR, the learning plan may have to adapt from a standard business/consumer/sensory protocol.

Step Six. Task: Lock into the context(s) to be researched. Goal: Be clear regarding the needs for the context and how to achieve them (so that this research does not become faux context research, e.g., regular research, or understanding a different category).

Step Seven. Task: Consider who the subjects must be given the context(s), and how to identify/gather them. Goal: Figuring out who the participants need to be to achieve authentic context information, and how to obtain those individuals, can shape the context, the team, timing, and tools.

Step Eight. Task: Organize the research/support team to assure that all of the needs of the context, data collection, and reporting are assembled. Goal: Having the right team members to support the research and help advance it is essential given the range of activities typically involved. Team members must commit to the requirements of the research and know the level of participation that is required for the research to be meaningful.

Step Nine. Task: Create the correct timetable for the research to obtain authentic contextual results. Goal: Context research often has extended timeframes to create/obtain the real results. Make sure this is planned for and that all team members understand commitments and the rationale behind them.

Step Ten. Task: Build the methodologies into the design that fit the needs of the context(s). Goal: There is not one set context-based method. Rather, there is a range of strategies that may fit the purpose of this research.

Step Eleven. Task: Provide for adequate team download/debriefing timing. Goal: Reflection by team members on what they are thinking, hearing, and feeling from the interventions is essential. Having time when the observations are fresh and then reflecting on those conversations is essential. Context research requires dialog; it is not a one-observer activity.

Step Twelve. Task: Design the data integration and strategy for topline summarization. Goal: Prior to going into the research, have a prearranged discussion about what the early "rushes" of the research must look like and what the "data" must be for a meaningful topline.

Step Thirteen. Task: Identify the appropriate final reporting format and timing. Goal: Context research may involve a variety of media and sources of data. Having a plan BEFORE beginning the research is important so that all of the materials can be organized for maximum current and future benefit on the topic.

Step Fourteen. Task: Create the right situation for archiving the entire context(s). Goal: CAR is often very important for future needs. As a result, finding a way to make it accessible so it is usable for individuals who were not part of the original research needs to be considered.
Step Seven: Data integration and topline summarization of findings. For CAR, it is essential to integrate the results as they are collected and to do this as a team/collaborative activity. Unlike traditional qualitative and quantitative testing, the planning and implementation should have allowed a deeper engagement of the full CAR team. Convening debriefs or downloads at breaks in the research, while the data collection phase is still progressing, builds understanding across the team of what has been heard and felt by the team members. This must be recorded (electronically and/or by extensive notetaking at the time of the debriefs) so that it can be incorporated in the final download discussion. It is recommended that the download across the team for any large-scale CAR be conducted directly following the research. Why? Engagement of the team is greatest at this time, and you are likely to have the largest number of team members who have actually listened, heard, and thought about the CAR. If viewing/listening biases have filtered in, this download/debrief provides an opportunity for discussion and for making sure that observations are aligned across the team and anchored in the observed phenomena of the research. The following approach for the debrief/download will achieve the best research and business conclusions:

1) Prearrange the debrief/download timing.
2) Use a generalized agenda to manage expectations and time.
3) Assign a scribe to take physical (on computer) notes, semi-verbatim. Record the session if all are in agreement.
4) Typical download session format:
a. Starts directly after the last session and generally lasts a minimum of 2 h.
b. Is facilitated by either the lead researcher or the lead context integrator (not the lead facilitator—it is time for this person to see whether the team heard and saw what they heard, or not). The role of the download facilitator is to make sure that all voices are heard, to note when comments are made that are not held by everyone, and to develop a discussion around these points to expose the points of difference.
c. Initial discussion is around: (1) project/business implications, the so-what-does-this-mean? The discussion is broad and moves around the team one person after the other; (2) a full and complete vetting of all of the team's hypotheses, where they are identified as confirmed, disconfirmed, did not hear, or not explored; (3) during the hypothesis vetting, if learnings occur that modify a hypothesis, the team may agree to change the hypothesis, and observed reasons for confirming or disconfirming hypotheses also become part of this document; (4) the team discusses the relevant secondary material—tallies, data, video, or imagery—that needs to be included in the final report and archiving; and (5) (if time allows) creation of a qualitative Kano (Beckley et al., 2012, Chapter 6, p. 113) to provide the hierarchy of elements for the product, service, or business design.
d. Conclude the download. All are thanked for their time, and the download facilitator checks in with all participants to make sure that no lingering concerns around the research or the conclusions by the team remain. A commitment is made for when the final report will be received and in what format.

Step Eight: Report the findings
Using the download/debrief format makes the final report much simpler to construct. The lead researcher, lead facilitator, lead observer, and lead context integrator should have already developed a reporting approach with assignments of tasks and the timeline.
At this writing stage, current practices for final reporting are becoming more and more novel, stretching from a classical written report in either a "newsletter" format (using Word or some other writing software) or a "deck" format (using PowerPoint or some other deck document creator) to fully online, cloud-based reports that allow gathering all of the media into one source and organizing it for simple download or viewing on the web. The essentials for CAR final reporting vary, but we would recommend making the final document a good archival piece of research that is easy to navigate, allows for future review of the materials collected, and expands the value and usability of the work (Hill, 1993; LeCompte & Schensul, 1999; Ryan, 2014).
30.3 Putting the pieces together
The following are two examples that illuminate the key points above. The first "case study" speaks to the evolution of a researcher's bias about the role of texture and how recognition of the role of context emerged over the years, while the second example is a simplified case study of a context anchored piece of research from another CPG (consumer packaged goods) product design perspective.

A. Case Study: Evolution and Context: The "you" bias

1. In the 1980s, this author was working on many product lines for a major CPG company, one product being ready-to-eat (RTE) cereal. It became apparent that there were different "styles" of eating RTE cereal along with different cereal design preferences. While we did not recognize or acknowledge it at that time, these differences could be due to the type of cereal, the use of a wetting agent (milk), how the wetting agent was applied, when it was applied, when the cereal was eaten, how it was eaten, the time of day it was eaten, and where it was eaten. A method was created to understand texture over time called "Bowl Life Testing", which tracked speed of consumption and texture over time (never published but used throughout the 1980s and much of the 1990s). While we were aware of many of the contextual factors above, our sensory focus did not lead us, at the time, to think about this as a context anchored problem; rather, we thought of it as a product-over-time (somewhat time-intensity)/sensory issue.
2. Move to the early 2000s. A large study called Crave It! was conducted by Moskowitz and Beckley (Beckley, Ashman, Maier, & Moskowitz, 2004) using the elements of food product design that would inform consumer choice for other foods (hopefully healthy versions). Part of the analysis revealed what appeared to be a gender split, where males indicated sight and smell were their secondary crave points (after taste) while females indicated it was texture (Beckley, 2001). This suggested to this author that there might be some "gene"/nature relationship to texture and that texture might play a different role than previously considered.
3. From 2004 through 2011, with a variety of collaborators, we began to look at how people ate food, the choices they made, and the rationale they used for selection. This was a combination of the phenomenology and grounded theory forms of qualitative research and involved a number of corporate exercises that began to point to something other than texture, per se, for food choice selection. The cereal research conducted in the 1980s was still a part of this author's thinking, and we began to have a much greater appreciation for behavior that we had noticed more than 20 years prior and how it persisted as a behavior over time.
4. From 2011 to 2012, with a theory called Mouth Behavior (Jeltema, Beckley, & Vahalik, 2015; Jeltema, Beckley, & Vahalik, 2016), a tool was created to broaden the inquiry from qualitative observation into quantification. The JBMB Typing Tool was designed to allow large-scale research into this subject.
5. Global research initiatives into a range of areas (the tool itself, how well it assists those studying oral processing, and the relationships of mouth behavior to other texture and choice behaviors) have been or are being conducted to understand this new finding.
6. In many of the resulting research papers, novel approaches to presenting the results are being used, such as dynamic landscape mapping and multiple-series bubble charts, since the activity of Mouth Behavior can be completely contextual and needs the assistance of video and other graphics to represent the conclusions.
7. Reflecting on past assumptions and training, and then observing as a scientist, showed this author that rethinking long-held biases about texture and its role was an important personal/scientific transition point; it underlies the recommendation to be aware of how education and training may influence one's ability to see factors such as context more clearly.

B. Case Study: Context and its influence on a health/beauty aid product

1. The project was to understand, in a more meaningful and authentic way, a specific group of busy women in order to create products that were more relevant to them.
2. As a result, the research was designed to go to the individuals and work within their world. (Note: this was conducted around 2009–11, so certain technologies that are taken for granted today weren't present at that point.)
a. Participants were selected as representing the "beachhead" customer.
b. They were given a range of activities designed to get them to relate their day-to-day reality to the hypotheses and knowledge the research/business team had created during the knowledge mapping pre-work.
c. Home visits by the team were arranged.
d. During the home visits, a wide range of activities could occur, such as planning meals, meal prep, and childcare or considerations regarding childcare (dropping off/picking up from school, friends, etc.). The homework, with visuals and projective imagery techniques, linked to the daily activities of these women.
3. What emerged during the homework, in-home, and follow-up discussions is that there was a particular issue for these women, regardless of body type, ethnicity, and income level. All of them lacked the time to feel that they looked as good as they wanted to during the family events that happened daily.
a. It was often represented by comments such as "I need a third arm" or similar statements.
b. When they demonstrated issues in their car or during shopping exercises, it became apparent that having cosmetics available that would work with a single hand was a gap in the product array they had at that time.
4. Following the research, the context-informed learning led the CPG company to:
a. Further investigate prior research for clues regarding this and other insights.
b. Refine the behavioral use model they had started prior to the field experiment.
c. Create a design team of product/package/human factors experts to develop products that not only meet the "one-hand" need but also other design features (no melt but easy to use; no mess; etc.).
d. Enable all other research that would look to refine and validate the findings from the perspective of modeling the context experience for the participants.
5. The research led to a whole new category of functional product design.

What these two brief examples illustrate is:
a. Understanding that an individual's bias can help or hinder the speed with which context is observed and how that context is understood.
b. Having updated knowledge and knowledge mapping, which then informs hypotheses, study design, and observations, can advance knowledge-gathering.
c. Numerous treks and observations can help reframe how a member of a team may think about a problem.
d. A large number of scenarios created during the research phase can really advance thinking on a topic—dealing with context has a special way of heightening the ability to lift the insight out of the routine actions of people.
e. Having a flexible approach to reporting and moving the knowledge and understanding forward can advance the creation of products that were thought to be unknown prior to the research.
Context anchored research relies on creating systems that are mindful of the key factors one needs to follow for good observational work—veracity, seeking or reorienting or disconfirming observations, participative role relationships, attentiveness to speech and interactional contexts, multiple modes of data collection, objectivity, respondent validation, and perspicacity (intense consideration of the data). In the end, it is creating a trustworthy process and being strong enough in the discipline to be willing to pass control over to the context and allow the magic of research to take place.
Acknowledgments I wish to thank the following individuals for their collaboration and support of contextual methods and understanding: Michele Foley, Leslie Herzog, Dr. Melissa Jeltema, Dr. Kannapon Lopetcharat, Dr. Dulce Paredes, Alina Stelick, Dr. Howard Moskowitz, Dr. Ratapol Teratanavat, and Jennifer Vahalik. The progress we have made in understanding people in-context could not have been possible without our “team” efforts. My thanks to Dr. Meiselman, Jennifer Vahalik, and Michele Foley for the keen editing they did to allow this chapter to represent an important comment towards context and research.
References

Anderson, P. F. (1986). On method in consumer research: A critical relativist perspective. Journal of Consumer Research, 13(2), 155–173. Arsel, Z. (2017). Asking questions with reflexive focus: A tutorial on designing and conducting interviews. Journal of Consumer Research, 44(4), 939–948. Aumann, R. (1976). Agreeing to disagree. The Annals of Statistics, 4(6), 1236–1239. Backmann, J., Hoegl, M., & Cordery, J. (2015). Soaking it up: Absorptive capacity in interorganizational new product development teams. Journal of Product Innovation Management, 32(6), 861–877. Baker, J. W., & Allen, G. (1968). Hypothesis, prediction, and implications in biology. Reading, MA: Addison-Wesley Publishing Co.
Bateson, J., & Hui, M. (1992). The ecological validity of photographic slides and videotapes in simulating the service setting. Journal of Consumer Research, 19(2), 271–281. Beckley, J. (2001). Personal documents from author presented to executives at McCormick & co. regarding crave it! Research. Available upon request. Beckley, J., Ashman, H., Maier, A., & Moskowitz, H. R. (2004). What features drive rated burger craveability at the concept level? Journal of Sensory Studies, 19(1), 27–48. Beckley, J. H., Herzog, L. J., & Foley, M. (2017). Accelerating new food product design and development. Ames: Wiley Blackwell. Beckley, J. H., Paredes, D., & Lopetcharat, K. (2012). Product innovation toolbox. Ames: Wiley Blackwell. Charness, G., Gneezy, U., & Kuhn, M. A. (2013). Experimental methods: Extra-laboratory experiments-extending the reach of experimental economics. Journal of Economic Behavior & Organization, 91, 93–100. Creswell, J. W. (1998). Qualitative inquiry and research design. Thousand Oaks: Sage Publications. Cull, N. J. (2014). Africa’s breakthrough: Art, place branding and Angola’s win at the Venice biennale, 2013. Place Branding and Public Diplomacy, 10, 1–5. Dahlstrom, M. (2014). Using narratives and storytelling to communicate science with nonexpert audiences. Proceedings of the National Academy of Sciences, 111(4). Dalio, R. (2017). Principles. New York: Simon & Shuster. Goel, L., Johnson, N., Junglas, I., & Ives, B. (2010). Situated learning: Conceptualization and measurement. Decision Sciences Journal of Innovative Education, 8(1), 215–240. Gutman, J. (1982). A means-end chain model based on consumer categorization processes. Journal of Marketing, Spring, 60–72. Hill, M. R. (1993). Archival strategies and techniques. Qualitative research methods series: vol. 31. Thousand Oaks: Sage Publications. Holt, D. B. (1995). How consumers consume: A typology of consumption practices. Journal of Consumer Research, 22(1), 1–16. Humphreys, A., & Wang, R. J. H. (2017). Automated text analysis for consumer research. Journal of Consumer Research. https://doi.org/10.1093/jcr/ucx104. Jacobs, A. (2017). How to think. New York: Currency. Jeltema, M., Beckley, J. B., & Vahalik, J. (2016). Food texture assessment and preference by mouth behavior. Food Quality and Preference, 52(3), 160–171. Jeltema, M., Beckley, J. H., & Vahalik, J. (2015). Model for understanding consumer textural food choice. Food Science & Nutrition, 3(3), 202–212. Johnson, S. (2012). The art of methods mixology: A fine blend of qual-quant methods unlocking performance excellence in banking. Procedia Economics and Finance, 2(2012), 393–400. Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science: A Multidisciplinary Journal, 4(1), 71–115. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux. Kelley, T. (2001). The art of innovation. New York: Crown Business. LeCompte, M. D., & Schensul, J. J. (1999). Analyzing & interpreting ethnographic data. Walnut Creek: Altamira Press. Little, R. M., & Froggett, L. (2009). Making meaning in muddy waters: representing complexity through community based storytelling. Community Development Journal, 45(4), 458–473. Logan, D. (2009). Known knows, known unknowns, unknown unknowns and the propagation of scientific enquiry. Journal of Experimental Botany, 60(3), 712–719. Mariampolski, H. (2006). Ethnography for marketers. Thousand Oaks: Sage Publications.
Meyer, M., Crane, F. G., & Lee, C. (2016). Connecting ethnography to the business of innovation. In Kelley School of Business: Indiana University, 2016.07.001. Elsevier Inc. Moggridge, B. (2007). Designing interactions. Cambridge: The MIT Press. Morales, A. C., Amir, O., & Lee, L. (2017). Keeping it real in experimental research – Understanding when, where, and how to enhance realism and measure consumer behavior. Journal of Consumer Research, 44(2), 465–476. Moskowitz, H. R., Beckley, J. H., & Resurreccion, A. V. A. (2012). Sensory and consumer research in food product design and development. Ames: IFT Press/Wiley Blackwell. Mueller, P., & Oppenheimer, D. (2014). The pen is mightier than the keyboard – Advantages of longhand over laptop note taking. Psychological Science, [April 23]. Murphy, M. (2017). Debrief: Microsoft CEO Satya Nadella. Business Week, 46–50. December 25. Prasad, S. (2014). Towards 20/20 vision. Esomar publication series: vol. 37, Global Qual ISBM 92-831-0284-3. Radvansky, G. A., & Zacks, J. (2014). Event Cognition. Oxford: New York. Rogers, E. M. (1976). New product adoption and diffusion. Journal of Consumer Research, 2(4), 290–301. Ryan, C. (2014). Adding the sparkle to immersion research. Esomar publication series: vol. 37, Global Qual ISBM 92-831-0284-3. Rylander, A. (2009). Design thinking as knowledge work: Epistemological foundations and practical implications. Journal of Design Management, 4, 7–19. Shapin, S. (2010). Never pure. Baltimore: The Johns Hopkins University Press. Silver, N. (2012). The signal and the noise. New York: The Penguin Press. Sivasubramaniam, N., Liebowitz, S. J., & Lackman, C. L. (2012). Determinants of new product development team performance: A meta-analytic review. Journal of Product Innovation Management, 29(5), 803–820. Sloman, S., & Fernbach, P. (2017). The knowledge illusion why we never think alone. New York: Riverhead Books. Stewart, A. (1998). The Ethnographer’s method. Qualitative research methods: vol. 46. Thousand Oaks: Sage Publications. Thompson, C. J. (1997). Interpreting consumers: A hermeneutical framework for deriving marketing insights from the texts of consumers’ consumption stories. Journal of Marketing Research, 34(4), 438–455. Tomasello, M. (2014). A natural history of human thinking. Cambridge: Harvard University Press. Tufte, E. (1983). The visual display of quantitative information. CT: Graphic Press. Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. Wallendorf, M., & Brucks, M. (1993). Introspection in consumer research: Implementation and implications. Journal of Consumer Research, 20(3), 339–359.
Further reading

Abrams, B. (2000). The observational research handbook. Chicago: NTC Business Books. Belk, R., Fischer, E., & Kozinets, R. V. (2013). Qualitative consumer and marketing research. London: Sage. Desmet, P., van Erp, J., & Karlsson, M. A. (2008). Design & emotion moves. Newcastle upon Tyne: Cambridge Scholars Printing. IDEO. (2003). IDEO method cards.
Kumar, V. (2013). 101 design methods. Hoboken: John Wiley & Sons. McCracken, G. (1988). The long interview (qualitative research methods). vol. 13. Newbury Park: Sage. Norman, D. A. (2004). Emotional design. New York: Basic Books. Pine, B. J., & Gilmore, J. H. (2011). The experience economy. Boston: Harvard Business Review. Psathas, G. (1995). Conversation analysis. The study of talk-in-interaction series: vol. 35. Thousand Oaks: Sage Publications. Riessman, C. K. (1993). Narrative analysis series. Qualitative research methods series: vol. 30. Thousand Oaks: Sage Publications. Wikipedia list of cognitive biases. (2018). https://en.wikipedia.org/wiki/List_of_cognitive_biases.