by Dominic Bria, Psy.D., MBA of the Shingo Institute
Employee surveys can be useful tools that show organizations where gaps exist between employee perceptions and those of managers and leadership. Leaders who want to measure the attitudes and perceptions their employees hold can choose from several kinds of surveys: instruments that measure employee engagement, job satisfaction, symptoms of job burnout, perceptions of corporate citizenship, and others. It's also common for companies to craft employee surveys of their own, sometimes as a less expensive alternative to pre-made surveys and sometimes to measure elements that existing surveys do not. In either case, survey instruments (the set of questions survey participants answer) are tricky things to design.
If not worded, ordered, and sized correctly, a survey can invite bias and trigger other behaviors, in both participants and researchers, that skew results or even yield information that is entirely false. At the Shingo Institute, it took many months to design the survey instrument used for the Shingo Insight™ assessment, which measures employee perceptions of the behaviors that indicate the presence or absence of the Shingo Guiding Principles. During the long process of researching and consulting to find the best way to craft Shingo Insight, we amassed a great deal of good information about the biases and other behaviors a survey design must guard against.
If you are considering designing your own survey, I applaud your ambition. Let me share with you just a few of the pitfalls to avoid as you design your survey.
Some surveys are designed to be administered by an interviewer. There are several types of interviewer bias, but one of the most common is confirmation bias. It can occur when an interviewer is familiar with the interviewee and interprets what the interviewee says based upon that perceived familiarity (Davis, Couper & Janz, 2009). For example, consider an interviewer who works in the HR department and has heard a certain employee express negative attitudes in the past on a topic examined by the survey. Based on those past comments, the interviewer may assume that the employee's responses during the interview are more negative than the employee really intends.
Recall bias can happen when an employee remembers the topic of a question in an undeservedly negative light because of something bad but unrelated that happened at nearly the same time. For example, suppose a change to an employee's duties is enacted very near the time the employee is reprimanded for coming to work late. Even though the two events are unrelated, they may run together in the employee's mind, and the employee may remember the procedure change negatively and answer questions about it accordingly. This is usually not a major problem, but it certainly can be if the effect is widespread. It can be mitigated somewhat through the design and ordering of the survey questions.
Response bias is what happens when only certain types of employees participate in a survey (Creswell, 2013). It can also occur when one group of employees leaves a certain question blank while another group answers it. Either kind of division can produce misleading data.
Sampling bias, by contrast, is a fault on the part of survey administrators: it happens when they choose to survey employees only from certain departments or sections rather than drawing an even sample from all of them. If it is impossible to survey a facility's entire workforce, it is important to get an even sampling of all employee types.
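As one illustration of even sampling, a simple proportional (stratified) draw can be sketched in a few lines of Python. The roster data, department names, and 20% fraction below are hypothetical examples, not part of any Shingo Insight methodology:

```python
import random

# Hypothetical roster: department -> list of employee IDs (illustrative data).
roster = {
    "Assembly":  [f"A{i:03d}" for i in range(120)],
    "Logistics": [f"L{i:03d}" for i in range(45)],
    "Quality":   [f"Q{i:03d}" for i in range(30)],
    "HR":        [f"H{i:03d}" for i in range(15)],
}

def stratified_sample(roster, fraction, seed=42):
    """Draw the same fraction of employees from every department,
    so no group is over- or under-represented among invitees."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    sample = {}
    for dept, employees in roster.items():
        k = max(1, round(len(employees) * fraction))  # at least one per dept
        sample[dept] = rng.sample(employees, k)
    return sample

invitees = stratified_sample(roster, fraction=0.20)
for dept, ids in invitees.items():
    print(dept, len(ids))
```

With a 20% fraction, each department contributes in proportion to its size (24, 9, 6, and 3 invitees respectively), rather than, say, all invitees coming from Assembly.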
Sometimes employees are afraid to say negative things about their employer and will skew their answers accordingly. This is closely related to acquiescence bias, also known as friendliness bias: the respondent is given to "yea-saying" and will answer all questions positively or mostly positively even when he or she does not really feel that way. This bias should not be confused with satisficing behavior, which happens when a respondent marks every question the same all the way through the survey, or answers randomly without reading the questions at all. The usual purpose of such behavior is to finish the survey as quickly as possible, whether because the employee feels the need to get back to work or simply sees no value in participating.
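Satisficing of the "mark everything the same" variety tends to leave a visible fingerprint in the data: long runs of identical answers, often called straight-lining. A minimal screening sketch is below; the 90% threshold and the answer sets are illustrative assumptions, not a recommendation from the article:

```python
def is_straight_liner(responses, threshold=0.9):
    """Flag a respondent whose answers are overwhelmingly one value.

    `responses` is a list of Likert-scale answers (e.g., 1-5);
    `threshold=0.9` is an arbitrary illustrative cutoff.
    """
    if not responses:
        return False
    most_common = max(set(responses), key=responses.count)
    return responses.count(most_common) / len(responses) >= threshold

# Illustrative answer sets for two hypothetical respondents
print(is_straight_liner([4, 4, 4, 4, 4, 4, 4, 4, 4, 4]))  # True
print(is_straight_liner([4, 2, 5, 3, 4, 1, 2, 5, 3, 4]))  # False
```

A screen like this cannot tell satisficing apart from a respondent who genuinely holds uniform views, so flagged responses warrant review rather than automatic exclusion.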
Habituation bias can sometimes be confused with satisficing behavior, but it is actually a biological response. It can happen when several questions in a row are worded very similarly. Responses to such a barrage can become automatic and therefore inaccurate.
Social Desirability Bias
Perhaps one of the better-known biases to avoid, social desirability bias can happen when the respondent believes that answering questions in a certain way will be more popular with co-workers in later conversations. It can also happen when the respondent perceives the topic of a question to be sensitive or uncomfortable; the answer given may then be swayed by a desire to avoid embarrassment. Social desirability bias can occur no matter how strongly the respondent is guaranteed anonymity, and it appears at similar rates in offline, online, and paper surveys (Dodou & de Winter, 2014).
In addition to avoiding the biases listed above (and others not mentioned here), care must be taken to design questions in ways that won't artificially sway the opinion of the survey participant. Without such care, it is easy to write questions that are leading, loaded, double-barreled, or unclear. A double-barreled question such as "Is your manager approachable and fair?" asks two things at once, leaving the respondent no good answer when only one of them is true.
Surveys that are designed and administered properly can help leaders make evidence-based decisions for their organizations. Designed or administered improperly, they can lead those same leaders to make decisions based on erroneous data.
Creswell, J.W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
Davis, R.E., Couper, M.P., & Janz, N.K. (2009). Interviewer effects in public health surveys. Health Education Research.
Dodou, D., & de Winter, J.C.F. (2014). Social desirability is the same in offline, online, and paper surveys: A meta-analysis. Computers in Human Behavior, 36, 487-495.