Sawtooth Software RESEARCH PAPER SERIES

ACA, CBC or Both? Effective Strategies for Conjoint Research

Bryan K. Orme, Sawtooth Software
1997

© Copyright 1997-2002, Sawtooth Software, Inc.
530 W. Fir St., Sequim, WA 98382
(360) 681-2300
www.sawtoothsoftware.com
ACA, CBC, OR BOTH? Effective Strategies For Conjoint Research

Bryan Orme, Sawtooth Software

Over the past decade, Adaptive Conjoint Analysis (ACA) by Sawtooth Software has become the most popular software package for conducting computerized conjoint studies in Europe (Wittink et al. 1992), and probably shares that same status in the U.S. ACA has proven to be a successful approach for conjoint studies involving more attributes than could be evaluated at the same time in a full-profile conjoint interview (Johnson 1991).

Choice-Based Conjoint (CBC), also by Sawtooth Software, differs from traditional ranking- or ratings-based conjoint in that it presents respondents with potential product choices and simply asks which option they would choose, or purchase. This approach is probably more realistic and better reflects what we actually do when evaluating and buying products and services in the real world. CBC has made choice studies accessible to researchers who do not wish to deal with the complexities of designing orthogonal full-profile choice designs.

It makes little sense to argue whether ACA or CBC is the better overall approach. Ratings-based and choice-based conjoint methods have been shown to produce generally similar results (Louviere 1988). And while ACA and CBC are both reliable methods with many overlapping capabilities, they are by no means substitutes: each is appropriate in different situations. Carefully evaluating the capabilities of ACA and CBC with respect to your specific research needs will ensure a sound approach, whether that means one technique or both. The discussion below is not intended to be an exhaustive list of strengths and limitations, but covers key points relevant to using ACA and CBC successfully in conjoint research.

Strengths of ACA

1) Can Include Many Attributes: ACA has advantages over traditional full-profile conjoint for studies that include many (seven or more) attributes (Johnson 1991).
In ACA, respondents do not evaluate all attributes at the same time, which helps solve the problem of “information overload” that plagues many full-profile conjoint studies. This is perhaps one of the most important reasons for choosing ACA. Although ACA can accommodate up to 30 attributes in a single conjoint study, it is a serious mistake to add non-critical attributes just because ACA has the capacity. Every additional attribute (or attribute level) means more parameters to be estimated, which in turn means more questions to be asked, which ultimately tires respondents. Fatigued respondents typically provide bad conjoint data--particularly if they quit in the middle of the conjoint interview.
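The cost of each added attribute can be illustrated with a quick parameter count. For a main-effects model with effects (or dummy) coding, each attribute with k levels contributes k - 1 independent part-worths, so the amount to estimate, and hence the number of questions needed, grows quickly. This is a general illustration of the counting rule, not ACA's exact internal coding:

```python
# Rough count of part-worth parameters in a main-effects conjoint model.
# With effects or dummy coding, an attribute with k levels contributes
# k - 1 independent parameters (one level serves as the reference).
def main_effects_params(levels_per_attribute):
    return sum(k - 1 for k in levels_per_attribute)

print(main_effects_params([3, 3, 4]))   # compact study with 3 attributes: 7 parameters
print(main_effects_params([4] * 15))    # 15 four-level attributes: 45 parameters
```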
2) Individual-level Utilities: At the end of each ACA interview, a complete set of utilities is automatically calculated and stored for each respondent. This is particularly beneficial for market segmentation research, since utilities or importances can be read into a cross-tabulation package for profiling known segments, and cluster analysis can segment respondents based on ACA conjoint information. (Sawtooth's Convergent Cluster Analysis (CCA) package can read ACA utility files, making this an especially easy process.) Individual-level data are also beneficial for product simulations. Through ACA's calibration concepts section, utilities are scaled such that respondents who are more likely to purchase have more impact in product simulations. Further, after merging other variables such as demographics into the utility file, results can easily be analyzed for sub-groups without re-estimating a new set of utilities each time, as would be necessary for aggregate (group-level) methods such as CBC.

3) Interview Adapts to Each Respondent: The interview is unique for each respondent. ACA customizes the interview to ask about the attributes and attribute levels most relevant to each respondent's purchase decision. Many respondents find this a more meaningful and engaging experience. Additionally, product concepts presented side-by-side in the pairs section are designed to be nearly equally preferred based on the respondent's previous answers. A difficult choice provides better information for refining utility estimates than an easy choice in which one product is clearly superior to the other, as in most orthogonal full-profile designs.

Limitations of ACA

1) Limited Ability To Measure Attribute Interactions: Like most conjoint methods used in practice, ACA uses a main-effects-only model. This means that each attribute is measured independently of all others.
Although main-effects models ignore possible interactions, strong interactions between attributes are uncommon in practice, so the inability to measure them is often not a concern (Johnson and Olberts 1991; Finkbeiner and Lim 1991). However, some interactions, such as between brand and price, are common to many markets, and if strong interactions exist, the ability to measure them can become critical to the validity of conjoint studies. Premium versus discount branding is commonplace in many product categories: buyers tend to be less price sensitive toward premium brands and more so toward discount brands. ACA and other main-effects models measure price sensitivity only relative to the average brand. ACA assumes approximately parallel demand curves, where the distance between brand demand curves reflects the main-effect utilities of the different brands. If different price sensitivities exist between premium and discount brands (reflected by different slopes in the demand curves), they will be lost.
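The consequence of forcing one price slope on all brands can be made concrete with a small numerical sketch. All utilities below are invented for illustration; the sketch compares multinomial logit shares computed from brand-specific price slopes against a single averaged slope, as a main-effects model would effectively use:

```python
import math

def logit_shares(utils):
    # Multinomial logit: share of each product given its total utility.
    e = [math.exp(u) for u in utils]
    total = sum(e)
    return [x / total for x in e]

# Invented part-worths. "True" model: the premium brand has a flatter
# (less price-sensitive) demand curve than the discount brand.
brand_util = {"premium": 1.0, "discount": 0.0}
true_slope = {"premium": -0.5, "discount": -1.5}   # utility change per price step
avg_slope = -1.0                                   # main-effects model: one slope for all brands

def total_util(brand, price_step, slope):
    return brand_util[brand] + slope * price_step

# Scenario: premium brand at the highest price step (2), discount at the lowest (0).
true_u = [total_util("premium", 2, true_slope["premium"]),
          total_util("discount", 0, true_slope["discount"])]
main_u = [total_util("premium", 2, avg_slope),
          total_util("discount", 0, avg_slope)]

print(logit_shares(true_u))  # brand-specific slopes: premium retains half the share
print(logit_shares(main_u))  # averaged slope under-predicts premium's share at high price
```

In this toy case the brand-specific model gives the premium brand a 50% share at its highest price, while the averaged-slope model predicts only about 27%, the under-prediction described in the text.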
Failure to specify interactions will usually pose little problem for main-effects estimates. But failure to incorporate significant interactions can easily invalidate the results of conjoint simulations (Finkbeiner and Lim 1991). For example, assume we have conducted a conjoint study for the airline industry which includes, among other brands, a premium brand, American Airlines, and a discount brand, Southwest Airlines. Let's further assume that American is preferred to Southwest, and that travelers are significantly less price sensitive toward American than toward Southwest. (This would make sense, since American tends to cater to upscale travelers, while Southwest positions itself as a low-cost, no-frills airline.) A main-effects model should correctly reflect the average preference for American over Southwest. However, since main-effects models assume similar price sensitivity for all brands, they may mis-predict the market's true reaction to price changes for brands that aren't average. For example, if American raises its price to the highest price measured and competitors' prices remain unchanged, the model will be inaccurate. Whereas in "truth" many buyers would still purchase American even at the highest price, the main-effects model will have "averaged" away this lower price sensitivity over the other brands and will under-predict preference for American at the highest price. Likewise, if we model Southwest at the lowest price, its "true" price sensitivity dictates that price decreases should have a strongly positive impact on share, yet the model will largely lose this effect and will under-estimate preference for Southwest at the lowest price. It is important to recognize that these limitations are not unique to ACA, but are common to all main-effects models. Furthermore, measuring attribute interactions is not impossible in ACA: you may explicitly define interactions when setting up your attributes and levels.
For example, a study with two brands and three price levels could be defined as a six-level "compound" attribute representing all combinations of brand and price. This approach is limited, however, since ACA allows only up to nine levels per attribute.

2) ACA Interviews Are Long And Can Be Taxing: Perhaps the most noticeable cost of individual-level conjoint data is the amount of information required from each respondent to estimate a full set of utilities. A typical ACA study asks between 30 and 60 separate questions and usually takes between 5 and 15 minutes to complete. Not only are there many questions, but a great deal of information must be read and processed, and choices are designed to be challenging. The paired comparison section can be taxing, since ACA chooses product concepts that are predicted to be nearly equally preferred based on the respondent's previous responses. In terms of utility estimation, this is good and entirely intentional, since answers to challenging pairwise questions provide good information for refining utility estimates. Even though the ACA interview can be lengthy and challenging, a recent study found that respondents perceive ACA to be more enjoyable and to take less time than traditional, full-profile conjoint (Huber et al. 1991). This finding is not surprising, since ACA shows fewer attributes at a time and adaptively customizes the interview for each respondent.
3) Impact Of Critical Factors Such As Price Can Be Underestimated: Some researchers have reported that as the number of attributes in ACA studies increases, the impact of critical factors such as price tends to be dampened (Pinnell 1994). A tell-tale symptom of this phenomenon is product simulations that allocate a larger share of preference to the most expensive and feature-rich products than would be expected (Pinnell 1994). A dual-conjoint approach can remedy the problem of underestimating the impact of price in large ACA studies; methods for doing so are described in the final section below.

Strengths of CBC

1) Realistic Interview Including "None": One of the most attractive features of Choice-Based Conjoint is the realistic nature of the respondent's task. Respondents evaluate a set of products presented on the screen and indicate which they would purchase. This simple task mimics what people do virtually every day. As in the real world, respondents can decline to purchase in a CBC interview by choosing the "none" option. Knowing how often respondents choose "none" provides a general indication of the degree of interest in, or purchase likelihood of, the product. Absolute volume or demand estimates should probably not be calculated from "none" usage in choice studies, since it has been shown that the "none" response is sometimes used when having two equally good products on the screen makes the decision hard, or as respondents become fatigued toward the end of longer choice interviews (Huber and Pinnell 1994).

2) CBC Interviews Can Be Short: CBC interviews usually consist of 4 to 12 questions and last between 2 and 5 minutes. Since CBC estimates utilities by pooling choices over a group of respondents, it doesn't require a great deal of information from any one individual. In fact, a choice study could include just one question per respondent, though asking more than one choice question increases the precision of the estimates.
Also, after completing the first question or two, respondents become more familiar with the demands of the task and can answer subsequent CBC questions more efficiently. After many questions (perhaps a dozen), the additional information gained is likely not worth the additional respondent fatigue.

3) Ability To Measure All Two-Way Interactions: CBC allows the researcher to analyze all possible two-way (first-order) interactions between attributes. This is a benefit of group-level analysis. Interactions are particularly useful in pricing research, where brand- or feature-specific price sensitivities are often of concern. Markets tend to be more price sensitive toward discount brands and less so toward premium brands. With CBC, the researcher can estimate unique demand curves for all brands in the study and incorporate these interactions in the simulator. If strong interactions exist, including them in the analysis is the only way to ensure the predictive validity of conjoint choice models. Interactions are generally difficult for all but the most advanced practitioners to accommodate in traditional, full-profile fixed orthogonal designs. Specific interactions must be anticipated (whether they exist or not) before the study is fielded, and most orthogonal plans designed for this purpose will only measure interactions between some of the attributes. CBC makes this process automatic: it creates randomized, near-orthogonal designs for each respondent, which, when pooled over many respondents, provide a statistically efficient design for estimating both main effects and all possible two-way interactions.

Limitations of CBC

1) Aggregate (Group-level) Data: Analysis for CBC is performed at the group level. Results are analyzed either with the "counting" technique or with CBC's built-in multinomial logit module. In most cases, a CBC interview does not provide enough information at the individual level to accurately determine any one respondent's preferences or utilities. Utility estimates and product simulations can be done for groups of respondents, but the model must be re-estimated for each new group. CBC can merge segmentation data from other sources, including Sawtooth's Ci3 System for Computer Interviewing. With these additional variables in the CBC data set, the researcher can analyze subgroups fairly quickly.

It is sometimes possible to gain some degree of individual-level information from CBC data for continuous variables such as price. Using a statistical package such as SPSS or SAS, the average price of chosen concepts could be calculated and used to segment respondents into low and high price sensitivity groups. This sort of analysis is only advisable when many choice tasks have been asked (say, eight or more), and in the absence of more reliable individual-level price information such as ACA utilities. One complexity that arises with this approach is what to do with respondents who answer "none" for most, or even all, of the questions.
It is difficult to know whether these respondents chose "none" because all price levels were too high, because their preferred brand was not included, or simply due to a general lack of interest in the product concept itself.

2) Limited To Six Attributes: Psychologists have long recognized that people have difficulty processing more than about six pieces of information at the same time (Miller 1956). Full-profile conjoint studies often press the limits of how much information can be successfully evaluated before respondents quit, glaze over, or start to employ sub-optimal shortcuts for making choices. The number of attributes is an important factor in task complexity--but not the only one. Complex or wordy attribute levels and respondents' familiarity with the attributes or product category also affect how effectively they can manage full-profile tasks. Indeed, it may be possible to construct a full-profile conjoint study with only six attributes that is too complex for some survey populations.
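The price-sensitivity segmentation heuristic described under point 1 above (averaging the prices of each respondent's chosen concepts) could be sketched as follows. The data layout, respondent IDs, and the median-split cutoff rule are all assumptions for illustration, not CBC's actual file format:

```python
# Hypothetical sketch: segment CBC respondents by the average price of
# the concepts they chose ("none" answers already excluded from the lists).
from statistics import mean, median

# respondent -> prices of the concepts that respondent chose (invented data)
chosen_prices = {
    "r1": [10, 12, 10, 14, 10, 12, 10, 12],
    "r2": [18, 20, 18, 16, 20, 18, 20, 18],
    "r3": [12, 14, 12, 12, 14, 16, 12, 14],
}

avg_price = {r: mean(p) for r, p in chosen_prices.items()}
cutoff = median(avg_price.values())  # split the sample at the median average price

segments = {r: ("low price sensitivity" if a >= cutoff else "high price sensitivity")
            for r, a in avg_price.items()}
print(segments)  # respondents who consistently chose cheap concepts land in the high group
```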
The Dual Conjoint Approach

Dual conjoint strategies involve using CBC (or another full-profile technique) to measure the impact of price, and then "bridging" or otherwise calibrating the ACA model to reflect that pricing information. Combining ACA and CBC in the same interview can capture the benefits and overcome many of the limitations of both packages. ACA can measure a wide variety of features, while CBC can focus on the attributes most critical to pricing decisions, such as brand and price (Pinnell 1994). Using both methods can successfully address a combined product design and pricing study which includes, say, 20 attributes and which also requires modeling brand-price interactions. All 20 attributes for the product design model would be included in ACA. For the pricing model, brand, price, and up to four other attributes (also included in ACA) would be measured in CBC. As an additional benefit, it has been suggested that ACA can provide valuable training which actually improves the results of subsequent full-profile conjoint exercises (Huber et al. 1991).

If significant interactions between brand and price are not detected in CBC's results, you may consider relying on the ACA model for both product design and pricing simulations. Before taking this step, test whether the importance of price is adequately represented in the ACA model, since ACA can underestimate the impact of price when price is included as just one of many attributes. This can be determined by configuring identical product simulation scenarios in both ACA and CBC for a set of products spanning the full range of price levels, and comparing the results. Product design studies that set out to find optimal product configurations can fail miserably if the importance of price has been artificially dampened. When this occurs, the "optimal" product usually ends up including the most features at the highest price, and the point of declining preference is never found.
Calibrating, or weighting, utilities for price in ACA models is one way to respond to this problem. Calibration can be accomplished in a number of ways, ranging from the simple (estimating a single weight to apply to all ACA price utilities) to the more complex (calculating individual-level weights for ACA price utilities, or estimating an entirely new set of price utilities, or a single price coefficient, to be incorporated into the ACA model). Although price has been identified as an attribute whose impact can be dampened in ACA, one should not assume that it is the only variable affected; the dual-conjoint approach might be used to treat other variables as well.

As a final note, if you plan to use ACA utilities in pricing models (either in their original or in an adjusted form), you should specify equal, or near-equal, price increments when setting up price levels. ACA assumes equal increments in the utility information contributed by the priors, and unequal price increments may lead to artificial kinks in the resulting demand curves. You may alternatively choose logarithmic increments if you have reason to believe the true price function has that shape. This caution applies only when using the rankings option; the ratings option offered in ACA 4.0 can be used with unequal price increments, since this information replaces assumptions about scaling in the priors section.
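The simplest calibration mentioned above, a single weight applied to all ACA price utilities, might look like the following sketch. The target importance, the part-worths, and the importance definition (an attribute's utility range over the sum of all attribute ranges) are all invented or assumed for illustration:

```python
# Hypothetical sketch: rescale ACA price part-worths with one weight so that
# price "importance" matches a target implied by a CBC pricing model.
aca_price_utils = [0.8, 0.3, -0.2, -0.9]  # assumed ACA part-worths for 4 price levels
other_ranges_sum = 6.0                    # assumed sum of utility ranges of all other attributes
target_importance = 0.40                  # assumed price importance implied by CBC simulations

price_range = max(aca_price_utils) - min(aca_price_utils)

# Solve  w*range / (w*range + other) = target  for the single weight w:
w = target_importance * other_ranges_sum / ((1 - target_importance) * price_range)
calibrated = [w * u for u in aca_price_utils]

new_range = max(calibrated) - min(calibrated)
print(round(new_range / (new_range + other_ranges_sum), 2))  # price importance now at target
```

The same algebra extends to individual-level weights by solving for w separately per respondent, at the cost of needing a respondent-level target.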
Summary

Sawtooth's ACA and CBC are both popular tools which have proven effective for conducting conjoint research. Neither is the most appropriate package for every situation. ACA and CBC have unique strengths and limitations which should be carefully evaluated before deciding which technique is most appropriate. For many applications, researchers find that the answer is not one or the other, but both.
References

Finkbeiner, Carl T. and Pilar C. Lim (1991), "Including Interactions in Conjoint Models," Sawtooth Software Conference Proceedings.

Huber, Joel and Jon Pinnell (1994), "The Impact of Set Quality and Choice Difficulty on the Decision to Defer Purchase," March.

Huber, Joel, Dick Wittink, John Fiedler and Richard Miller (1991), "An Empirical Comparison of ACA and Full Profile Judgements," Sawtooth Software Conference Proceedings.

Johnson, Richard M. (1991), "Comments On Studies Dealing With ACA Validity And Accuracy, With Suggestions For Future Research," Sawtooth Software Working Paper, May.

Johnson, Richard M. and Kathleen A. Olberts (1991), "Using Conjoint Analysis in Pricing Studies: Is One Price Variable Enough?," American Marketing Association Advanced Research Techniques Forum Conference Proceedings.

Louviere, Jordan J. (1988), "A Comparison of Rating and Choice Responses in Conjoint Tasks," Sawtooth Software Conference Proceedings.

Miller, George A. (1956), "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," The Psychological Review, March.

Pinnell, Jon (1994), "Multistage Conjoint Methods to Measure Price Sensitivity," Advanced Research Techniques Forum, Beaver Creek, Colorado.

Wittink, Dick, Marco Vriens and Wim Burhenne (1992), "Commercial Use of Conjoint Analysis in Europe: Results and Critical Reflections," July.