Ranking Questions vs. Rating Questions

Posted by Guest Blogger

May 11, 2011 11:13:00 PM


Sometimes I hear clients use the words ranking and rating interchangeably, even though there is a distinction. The difference is simple: a rating question asks you to compare different items using a common scale (e.g., "Please rate each of the following items on a scale of 1-10, where 1 is ‘not at all important’ and 10 is ‘very important’") while a ranking question asks you to compare different items directly to one another (e.g., "Please rank each of the following items in order of importance, from the #1 most important item through the #10 least important item").  Both types of questions have their strengths and weaknesses.

Ranking Questions

  • Pros
    • Guarantee that each item ranked has a unique value
  • Cons
    • Force respondents to differentiate between items that they may regard as equivalent
    • Emphasize items earlier in the list, which are more likely to be ranked highest
    • Return different results depending on the completeness of the list of items being ranked
    • Limit the range of statistical analysis available: they should not be analyzed as averages, as ranking questions do not measure the distance between two subsequent choices (which might be nearer or farther from each other than from other choices)
    • Can confuse respondents if numeric rating scales with 1 being the lowest rating are used elsewhere in the questionnaire (though you should use fully labeled scales on rating questions instead)
    • Take on average three times longer to answer than rating questions (Munson and McIntyre, 1979)
    • Mentally tax respondents, requiring them to compare multiple items against one another
    • Increase the difficulty of answering disproportionately as choices are added
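The point above about averages can be illustrated with a small sketch (the helper below is hypothetical, not from the post): converting ratings to ranks discards the distance between items, so two very different sets of ratings can produce identical ranks.

```python
def to_ranks(ratings):
    """Convert a list of ratings to ranks, where 1 = highest-rated item."""
    order = sorted(range(len(ratings)), key=lambda i: -ratings[i])
    ranks = [0] * len(ratings)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

# Two respondents with very different spacing between items...
print(to_ranks([9, 8, 1]))  # [1, 2, 3]
print(to_ranks([9, 2, 1]))  # [1, 2, 3] -- identical ranks, spacing lost
```

Because both respondents produce the same ranks, averaging those ranks would treat them as having identical views, which is why rank data calls for ordinal (non-mean) analysis.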

Rating Questions

  • Pros
    • Commonly used and easily understood by respondents
    • Allow respondents to assign the same value to multiple items
  • Cons
    • Often have a narrow distribution of ratings, which typically fall into an upper band (for instance, most items are considered important when using importance scales)
    • Lead to less differentiation among items, with the possibility that a respondent rates every item identically
    • Accept great personal variations in response styles (e.g., respondents who never assign the highest rating)
    • Produce possibly spurious positive correlations due to individuals' personal variations
    • Become tedious in matrix (grid) format, which encourages satisficing

(See "The measurement of values in surveys: A comparison of ratings and rankings" by Duane Alwin and Jon Krosnick for a more technical review.)

Mental Burden of Ranking Questions

The mental effort required to answer a rating question is linear: the same effort is involved per item. The mental effort for a rank-order question grows quadratically, as N*(N-1)/2 pairwise comparisons, since each item has to be compared to every other item. Because the effort grows rapidly as more items are added, it is commonly advised to only use ranking questions when there are seven or fewer items to compare.
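The growth described above can be tabulated with a few lines of code (a minimal sketch of the N*(N-1)/2 formula, with the function name being my own):

```python
def pairwise_comparisons(n: int) -> int:
    """Number of item-to-item comparisons implied by fully ranking n items."""
    return n * (n - 1) // 2

# Effort per question as the list grows:
for n in (3, 5, 7, 10):
    print(f"{n} items -> {pairwise_comparisons(n)} comparisons")
# 3 items -> 3, 5 -> 10, 7 -> 21, 10 -> 45
```

Note how adding the eighth through tenth items more than doubles the comparisons, which is consistent with the seven-item rule of thumb.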


When to Use Which

Think about whether the items being asked about are expected to be very similar or very different from one another. For instance, when asking people to rate the importance of items about why they did business with your organization or why they purchased a product, many attributes are of similar importance, making a rating scale appropriate.  Alternatively, when asking people what features you should work on next, where you need to build a priority list for your development team, a ranking question may be more appropriate.

The literature is muddied on which approach is most reliable. Early Krosnick research ("Maximizing Questionnaire Quality", 1999) saw ranking questions as having greater predictive validity, but a number of studies since, including his own later research, show rating questions as having greater validity (Krosnick, Thomas, and Shaeffer, 2003; Maio, Roese, Seligman, and Katz, 1996). Sometimes you will want to consider alternatives such as MaxDiff or constant-sum questions. No one approach is perfect.

Topics: Survey Research
