Feature Importance (MaxDiff)

ADD-ON FEATURE: You can use this solution with our Audience collector on any SurveyMonkey plan type. This solution is available for US Data Center accounts only. Get started now, or explore our other market research solutions.

The Web Link collector is only available on Premier plans and higher.

The MaxDiff solution helps you understand what items or features matter most to your audience. This is helpful when you’re prioritizing new products or features and want to know what your customers care about most. 

The solution is based on MaxDiff methodology. You don’t have to create each MaxDiff question yourself – you enter the items you want to include in your survey, and we create the questions.

In a MaxDiff study, respondents should see each item about the same number of times as the other items. SurveyMonkey’s MaxDiff solution creates 30 versions of the survey and distributes them randomly to survey takers to achieve the most bias-free results. However, you can choose how many items are included in each set and how many questions each respondent sees.

We consider the following metrics when creating the different versions of your survey. These metrics ensure that your items are represented fairly to reduce bias in your results (a rough sketch of how you might check this balance follows the list).

  • One-way Frequencies: Each item should ideally appear an equal number of times to the respondent as they take the survey.
  • Two-way Frequencies: Each item should ideally appear in the same set with each other item an equal number of times.
  • Connectivity: Each item should directly or indirectly be shown with every other item across the 30 versions of the survey.
  • Within-set positional balancing: Each item should ideally appear an equal number of times in the top, middle, and bottom of the question when shown to respondents.
  • Across-set positional balancing: The same item shouldn’t be shown in successive or nearby questions.
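
The sketch below is one minimal way to spot-check the one-way and two-way frequency metrics yourself, for example against the experimental design report you can download during setup. It's a Python illustration that assumes each survey version is represented as a list of question sets (each a list of item labels); it isn't SurveyMonkey's own balancing code.

    from collections import Counter
    from itertools import combinations

    def balance_summary(version):
        """Tally one-way and two-way frequencies for one survey version,
        given as a list of question sets (each a list of item labels)."""
        one_way = Counter()
        two_way = Counter()
        for question_set in version:
            one_way.update(question_set)
            two_way.update(frozenset(pair) for pair in combinations(question_set, 2))
        return one_way, two_way

    # Illustrative version: 3 sets of 4 items drawn from items A-F.
    version = [["A", "B", "C", "D"], ["C", "D", "E", "F"], ["A", "B", "E", "F"]]
    ones, twos = balance_summary(version)
    print(ones)  # each item appears twice, so one-way frequency is balanced
    print(twos)  # uneven pair counts point to two-way imbalance

A well-balanced design keeps both tallies as even as possible across all 30 versions.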

MaxDiff is not a predictive model; it uses algorithms to estimate outcomes based on multiple-choice scenarios. We recommend that you provide your own notices about the use of these tools to your respondents.

Results from your study are retained for 3 years. After 3 years, the results are deleted.

To set up your study:

  1. Create a new MaxDiff study.
  2. Replace Untitled study with a study title on the Get Started page. Review what to expect in setup and after launching your study. 
  3. Select Next: Set up MaxDiff.

To add survey components:

  1. Add an introduction to let survey takers know exactly what you want them to do while completing your survey.
  2. Add MaxDiff question text to tell survey takers how to respond to your MaxDiff question. This text goes above your MaxDiff question. Make sure people know to choose one best and one worst from each set of items.
    • Choose the label to use for the “Best” and “Worst” options in your MaxDiff questions. “Best” and “Worst” are selected by default. You can also create your own labels.
  3. Add items to MaxDiff to use in your questions. You can add items one at a time or import items in bulk.
    • Select Add an item to enter a new item. Add a label and image if you want.
      The Item description is what survey takers will see.
    • Select Import items to use a list of items. Preview the items on the right. When you’re done, select Import.
  4. Edit items per set and total sets. This option determines how many items to include in each question and how many question sets you want survey takers to see. When you’re done, you can use the Preview MaxDiff Question section below.
    • Items per set: The number of items shown in each MaxDiff question set. Add at least 5 items.
    • Sets per respondent: The number of MaxDiff question sets each survey taker will see. Read the help text above the Sets per respondent field; it changes based on the number of items you add (see the sketch after these setup steps for the basic arithmetic).
    • Image scale: The size of the image you’ve added to your items. The scale applies to all images. View the image in the Preview MaxDiff Question section.
    • Preview MaxDiff Question: See what your questions will look like in the survey. Select Single set to see what a single question will look like. Select All items to preview all items together in one set.
    • Select View report to see the experimental design report. This report tells you how we balance your study to avoid biased results. To save a copy of this report, select Download report (.csv).
  5. Select Next: Add custom questions.
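
As a rough guide to how Items per set and Sets per respondent interact, the sketch below estimates how many times a single respondent sees each item, assuming the design spreads items evenly. The numbers are purely illustrative, not SurveyMonkey defaults or recommendations.

    def expected_appearances(total_items, items_per_set, sets_per_respondent):
        """Approximate how many times one respondent sees each item,
        assuming items are spread evenly across that respondent's sets."""
        return items_per_set * sets_per_respondent / total_items

    # Illustrative values only: 12 items, 4 items per set, 9 sets per respondent.
    print(expected_appearances(12, 4, 9))  # -> 3.0 appearances per item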

You can add additional custom questions to your survey to gather other important information from your target audience. You can:

  • Use the Question Bank to add pre-written questions, if you want. 
  • Implement logic for the questions you add. 
  • Brand your survey.

MaxDiff questions are locked to protect our proven methodology and can’t be edited. You can preview them but can’t change them.

Select Preview survey from the top right corner to test your survey in a new window and see what it will look like to respondents. You can even share the preview with others to gather feedback.

Select Next: Collect Responses when you’re ready to send your study.

When you’re ready to send your survey, select the Collect icon (the second icon down) in the left sidebar.

There are two ways to collect responses:

  • Share a survey link: Create a web link to send your survey to your own audience. This feature is only available on Premier plans and higher.
  • Target your ideal respondents: Buy responses from certain demographics using the SurveyMonkey Audience panel. This feature is available on all plans.

You can create multiple collectors for your MaxDiff study.

PAID FEATURE: Web Link collector for the Feature Importance (MaxDiff) solution is only available on Premier plans and higher.

Create a web link that you can send any way you want. To share a survey link:

  1. On the Collect page, select Add new collector, then select Web link collector.
  2. Customize your Web link settings as needed.
  3. Copy your web link when you’re ready to send your survey.

SurveyMonkey Audience panelists have been sorted based on hundreds of targeting options so you can target your respondents based on country, demographics, employment status, hobbies, religion, and more.

To choose your target audience:

  1. On the Collect page, select Add new collector, then select Buy Responses.
  2. Select your target audience by choosing Country, Gender, Age, and Income criteria.
  3. Select More Targeting Options to browse and choose from hundreds of other targeting options.
  4. Choose how many complete responses you need. We’ll provide a recommendation based on the number of items in your study.
  5. (Optional) Select whether to add a screening question. Screening questions help you narrow down your target audience and disqualify people who aren't the best fit for your survey.
  6. If you add a screening question, estimate how many people you expect to qualify for your survey (a quick arithmetic sketch follows these steps).
    • If you choose to include a screening question in your survey, open the Customize survey page in a new tab.
    • Add a qualifying question to the beginning of your survey. We recommend either Multiple Choice or Checkboxes.
    • Add skip logic that disqualifies people if they select certain answer choices.
    • Return to the Target audience setup page to estimate your Qualification Rate.
  7. Review your target audience summary.
  8. When you’re ready, select Checkout.
  9. Review the details of your order.
  10. Under Payment Method, select to pay with Credit or Debit Card, or with My Credits.
  11. Enter your Billing Details, review the total, and select Confirm.
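
If you include a screening question, the number of people who need to start the survey grows as your expected qualification rate falls. The calculation below is a rough, hypothetical illustration, not a SurveyMonkey pricing or sampling formula.

    def starts_needed(target_completes, qualification_rate):
        """Roughly how many respondents must start the survey to reach the
        target number of qualified completes. Illustration only."""
        return round(target_completes / qualification_rate)

    # Hypothetical example: 400 completes needed, about 50% expected to qualify.
    print(starts_needed(400, 0.50))  # -> 800 starts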

Once you submit payment, we'll start gathering responses for your survey right away.

Select the chart icon on the left side of the screen to start analyzing your results. The Analyze section includes Overview, Counts, Empirical Bayes, Survey results, and Individual responses views.

  • Overview: Track your Audience project's status, view how many responses are collected, and see a summary of your targeting criteria.
  • Counts: Get a simple view of how often items were chosen as Best or Worst.
  • Empirical Bayes: Learn how each item performed compared to others.
  • Survey results: View charts and data for your custom questions. 
  • Individual responses: View each survey taker’s response.

Counts analysis shows you how often items were chosen as Best or Worst. This data helps you quickly understand how survey takers rated each item. You can view a few different data sets in the chart: Simple counts, Best counts, Worst counts, or Best and Worst counts.

  • Simple counts: Total Best ratings for an item minus Worst ratings.
  • Best counts: How many times an item was chosen as Best. 
  • Worst counts: How many times an item was chosen as Worst.
  • Best and Worst counts: Compare how many times an item was chosen as Best and Worst. 

View all data for each item in the table below your chart. The table also includes Count proportion, which is the number of Best or Worst counts divided by the number of times people saw the item.
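
To illustrate how these numbers relate, the sketch below computes Simple counts and Count proportion from hypothetical tallies. It follows the definitions above; the figures aren't real survey data.

    # Hypothetical tallies per item across all respondents.
    tallies = {
        "Item A": {"best": 42, "worst": 10, "times_shown": 120},
        "Item B": {"best": 18, "worst": 35, "times_shown": 120},
    }

    for item, t in tallies.items():
        simple_count = t["best"] - t["worst"]             # Best minus Worst
        best_proportion = t["best"] / t["times_shown"]    # Count proportion for Best
        worst_proportion = t["worst"] / t["times_shown"]  # Count proportion for Worst
        print(item, simple_count, round(best_proportion, 2), round(worst_proportion, 2))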

Empirical Bayes shows how survey takers feel about each item. It estimates each item's likelihood of being chosen as Best, relative to other items. 

Empirical Bayes calculates a utility score for each item, which is a measure of how well an item performed. We calculate this score across all sets and survey takers. We use the following data to calculate utility score for each item:

  • Number of times chosen as Best
  • Number of times chosen as Worst 
  • Number of times shown

First, we combine all responses to find the utility score for each item. Then, we calculate an item's utility value for each respondent. If someone didn't see every item, we use Bayesian Pooling (or "Shrinkage") to estimate how they would have responded to the items they missed. We assume they would answer similarly to the aggregate score, since it combines all responses, so we "shrink" the survey taker's utility value toward the aggregate score for each item they didn't see. These scores help us estimate how likely it is that an item will be chosen as Best.

The chart shows the utility score for each item. A higher score means that an item is more likely to be chosen as Best and may be more important to your target audience.
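
The pooling ("shrinkage") step described above can be pictured as a weighted blend between a respondent's own estimate and the aggregate score. The weighting in this sketch is an illustrative assumption, not SurveyMonkey's exact model.

    def shrink_toward_aggregate(individual_utility, aggregate_utility, times_seen, pooling_strength=5.0):
        """Blend a respondent's utility estimate with the aggregate score.
        The fewer times the respondent saw the item, the closer the result
        sits to the aggregate score. Weights are illustrative only."""
        weight = times_seen / (times_seen + pooling_strength)
        return weight * individual_utility + (1 - weight) * aggregate_utility

    # A respondent who never saw the item gets the aggregate score.
    print(shrink_toward_aggregate(0.0, aggregate_utility=1.4, times_seen=0))   # -> 1.4
    # A respondent who saw it many times mostly keeps their own estimate.
    print(shrink_toward_aggregate(2.0, aggregate_utility=1.4, times_seen=10))  # -> 1.8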

Select a scale in the top right corner to change how you view the data (a small conversion sketch follows this list):

  • Probability-scaled: Shows the likelihood that people will choose an item as Best. We show each utility score as a percentage. The highest scoring item is most likely to be chosen as Best. The scores add up to 100%.
  • Zero-centered: Adjusts the scale to use 0 as the average score. A positive score means the item performed better than average. A negative score means the item performed worse than average.
  • Raw scores: View the raw utility scores for each item. Raw scores reflect the actual choices people made during the survey. Raw scores are not probabilities.
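
To show how the same raw utility scores look on each scale, here is a small sketch. The probability scaling uses a softmax-style transform, a common choice for MaxDiff reporting; the article doesn't specify SurveyMonkey's exact transform, so treat that part as an assumption.

    import math

    raw = [1.2, 0.4, -0.3, -0.9]  # hypothetical raw utility scores

    # Probability-scaled: percentages that sum to 100 (softmax-style, assumed transform).
    exps = [math.exp(r) for r in raw]
    probability_scaled = [100 * e / sum(exps) for e in exps]

    # Zero-centered: subtract the mean so the average item scores 0.
    mean = sum(raw) / len(raw)
    zero_centered = [round(r - mean, 1) for r in raw]

    print([round(p, 1) for p in probability_scaled])  # four percentages summing to 100
    print(zero_centered)                              # [1.1, 0.3, -0.4, -1.0]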

The table below the chart shows each item’s score and the 95% confidence interval. The confidence interval is the range of scores that would contain the item's score a certain percentage of the time if we ran the survey over and over. For example, if an item has a 95% confidence interval of [13–14], it would score between 13 and 14 in 95 out of 100 repetitions of the survey.

On any page, select the Filters button above the chart to filter your data. Any filters you apply to one chart will also apply to others. For example, if you add an Age filter to your Counts analysis, we’ll also apply it to your Empirical Bayes analysis.

You can export Counts data, aggregated Empirical Bayes data, or full response data.

To export your Counts or Empirical Bayes data:

  1. Select Export.
  2. Choose the data type you want to export: either Counts or Empirical Bayes Aggregate.
  3. Name your file.
  4. Select Export.

The export appears at the bottom of your browser window or is available from your computer's Downloads folder.

To export full response data:

  1. Select Export.
  2. Choose Full Response Data.
  3. Choose your file format: XLSX, CSV, or SPSS.
  4. Choose to export either the Current view or the Original view (with no filters applied).
  5. If you're exporting an XLSX or CSV file, choose the format for your columns, then choose the data to show in each cell.
  6. Name your file.
  7. Select Export.

The export appears at the bottom of your browser window or is available from your computer's Downloads folder.