Customer Engagement

The Use and "Abuse" of Survey Rating Scales

One of the best aspects of my role as a sales engineer is meeting the many different organizations and business professionals who discuss their survey research projects with me. We often begin our discussions with thoughts on how best to gather the important information they need to make difficult business decisions. One topic that comes up frequently is survey rating scales.

While an examination of survey rating scales borders on tedious, the correct use of rating scales is critically important – and I believe that scale “abuse” is rampant. Okay, maybe abuse is too strong a word, but inappropriate use of rating scales can have a significant impact on results and ensuing business decisions.

One organization with whom I spoke wanted to start internal benchmarking of departments and divisions. When they reviewed their recently collected survey data, they discovered that, of the 29 customer satisfaction questions they analyzed, they had used nine different rating scales!

In another company’s customer satisfaction study, major accounts were asked to rate key service attributes on a scale of 1 to 4, but the endpoints of the scale were never labeled. In reading the respondents’ open-ended comments, it was clear that some of them interpreted the scale as a rank from 1 to 4 in importance, while other respondents interpreted 4 as excellent and 1 as poor. The resulting data was completely unusable.

Ignoring the importance of rating scales, or using poorly designed scales, can seriously impact your ability to gather the information you need to make the right decisions for your business. Many do-it-yourself researchers ignore established techniques for choosing the types of rating scales that will produce the most accurate results. Instead, they copy scales used in other questionnaires created by their organization or, worse, make up their own arbitrary scales. Often, I have seen a single questionnaire mix and match several different approaches.

Current thinking, scientific research, and my own experience working closely with Vovici customers have yielded the following best practices concerning survey rating scales:

  • Use 5-point scales when rating against one attribute (unipolar scales, for example: “Not at all satisfied” through “Completely satisfied”)
  • Use 7-point scales when rating against polar opposites (bipolar scales, for example: “Extremely likely to recommend against” through “Extremely likely to recommend”)
  • Use unipolar scales instead of bipolar scales wherever possible, as such scales are shorter and less confusing to respondents
  • Use fully labeled scales without showing respondents any numeric ratings – such scales are preferred by respondents and have higher reliability and predictive validity than numeric scales
  • List rating scales with the most negative item first, to prevent order-effect bias from inflating your ratings
  • Use common scales whenever possible, rather than writing your own scales
  • If you do choose to write your own scales, follow one or two common patterns when framing your choices
  • Develop guidelines as to the common scales to use across your organization and your research, so that you can compare the results from study to study and from department to department
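To make that last practice concrete, one way to develop organization-wide guidelines is to define your common scales once in a shared library and reuse them in every study. The sketch below assumes Python and illustrative label wording of my own choosing (not an official set of scales); it encodes several of the practices above: a 5-point unipolar scale, a 7-point bipolar scale, full labels with no numbers shown, and the most negative item listed first.

```python
# Illustrative sketch of shared, reusable rating-scale definitions.
# The label wording here is an assumption for demonstration purposes.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class RatingScale:
    name: str
    # Fully labeled choices, listed most negative first; respondents
    # see only the words, never the underlying numeric codes.
    labels: Tuple[str, ...]


# 5-point unipolar scale: rates a single attribute from absence to full presence.
SATISFACTION_5 = RatingScale(
    name="satisfaction",
    labels=(
        "Not at all satisfied",
        "Slightly satisfied",
        "Moderately satisfied",
        "Very satisfied",
        "Completely satisfied",
    ),
)

# 7-point bipolar scale: polar opposites with a neutral midpoint.
RECOMMEND_7 = RatingScale(
    name="likelihood to recommend",
    labels=(
        "Extremely likely to recommend against",
        "Moderately likely to recommend against",
        "Slightly likely to recommend against",
        "Neither likely to recommend nor recommend against",
        "Slightly likely to recommend",
        "Moderately likely to recommend",
        "Extremely likely to recommend",
    ),
)


def render_question(prompt: str, scale: RatingScale) -> str:
    """Render a question with fully labeled choices and no visible numbers."""
    return "\n".join([prompt] + [f"( ) {label}" for label in scale.labels])


print(render_question("How satisfied are you with our support team?", SATISFACTION_5))
```

Because every study pulls the same definitions from one place, results stay comparable from study to study and from department to department.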

In future posts, I will elaborate on several additional best practices, including using bipolar and unipolar scales, the case for 5-point vs. 7-point scales, rating scale labels, custom rating scales, listing negative choices first, and “no opinion” as a question choice – each of these practices has its own set of points worth discussing. You conduct survey research to gather important information that impacts business decisions. The last thing you want is to ignore the impact of survey rating scales and discover that they have obscured the truth about your business environment.