There is a clear link between feedback and success. Feedback is central to everything we do.
Opportunities for feedback
There are unlimited opportunities for leaders, teams and organizations to get feedback. When asked, most leaders mention customer surveys and employee surveys. But they often overlook many other valuable opportunities.
Our advice is to take a disciplined approach to feedback and to consider it an essential tool to engage and align teams.
You need a structured process to ensure success with your survey project.
Goals and objectives are essential to guide survey design. Resist the urge to dive head-first into question writing.
Establish a Steering Committee or talk to a subject matter expert!
Depending on the type of survey you are designing, it can be helpful to talk to a subject matter expert or involve a Steering Committee before you do your own research.
Field experts understand the subtlety in a topic area and the things that can look important but are ultimately distracting. Talking to someone in the field can save time and help focus research into the right areas.
A benefit of talking to a subject-matter expert or a Steering Committee is that they are likely to know the pitfalls with certain types of questions and what to expect. They can help you to shape your objectives into relevant questions for the survey.
Get a survey expert to help
Even though you and your team may be capable of creating thought-provoking questions, it can be beneficial to talk to an experienced researcher or consultant. Someone with strong survey-building experience can help you clarify the type of information you are seeking and avoid pitfalls. Bringing in a third party adds an objective pair of eyes to the process. They will help make the objectives more specific and identify where more information may be needed.
- Write down your survey goals
- Identify the specific and measurable survey objectives
- Get feedback from survey and subject-matter experts or a Steering Committee
- Research the topic from end to end
- Create the survey questions and test them
- Make sure you conduct a pilot
Clear policies are important to guide those who manage the survey project and protect the respondents.
Survey policies guide actions
Policies may be necessary to guide internal actions as well as to give comfort to the respondents. Most important is that the policies are clearly communicated to the relevant people in the project. Such communication may need to be explicit, or it may be implicit, depending upon the nature of the policy and its target audience.
Make sure the policies are followed
It is imperative to ensure that the survey policies are communicated and adhered to by those people who have access to the survey project results. Anonymity policies that are communicated to survey respondents give them comfort about who will be able to see their responses.
Start at the high level before getting to the detailed survey design. Begin the design process with clear outcomes in mind.
Use titles, headings and instructions to guide respondents clearly
Headings and instructions provide a sense of structure and make the survey easier for respondents to follow. They can also make it easier for those who analyze and interpret the data.
Use page breaks appropriately
Page breaks can serve several purposes. They are a way to ensure that survey responses are saved if a person is interrupted when completing the survey. They can also be used to focus attention on a single question at a time. And, they can be used to fit with the structure and headings or instructions in a survey. For example, they allow headings and instructions to be placed at the top of a page and can be varied from one page to the next.
When a survey contains multiple free text questions, we recommend using page breaks more often so that respondents don’t lose a large amount of written work if they are interrupted before the page is saved.
Be careful not to use too many page breaks. On longer surveys they will be very frustrating for the respondents, particularly if they are using a mobile device.
Designing great questions
After considering the bigger picture goals and objectives, start designing the survey itself. The survey questions you create must be designed to achieve your survey goals and objectives. They shouldn’t ask anything that is irrelevant. Here are some guidelines for writing good questions:
- Keep the questions short
- Use simple, easy-to-understand language
- Be specific in what you ask
- Phrase questions in a direct way
- Keep all questions relevant
- Test the questions by asking whether they will achieve the survey objectives
These guidelines will help you to stick to the point, and get only the data that supports your goals and objectives.
Select the right question types
There are different question types you can use. Using the right type is important to get valid data.
Quantitative questions are directly measurable. This means that you set up a list of answers and your respondents will choose from those possible answers. These questions will give you clean reports, easy-to-analyse charts, and will help you identify patterns and trends.
Qualitative questions let respondents answer in their own words. Even though they can be more difficult and time consuming to analyse, qualitative questions provide deeper insight into how your respondents are thinking.
To get the best results, use a combination of quantitative and qualitative survey questions. If asking qualitative questions, don’t ask them up-front. Get buy-in from your respondents early with easy quantitative questions and leave the free text questions to later.
Think about your reporting early
It is a common mistake to design and conduct surveys without considering the reporting needs. Consequently, reports can then be difficult to pull together. Avoid this by thinking about your reporting during the design stage. A benefit of this is being able to include custom values that you will see in your reports, but which are hidden from the survey respondents. You might need to do this when reporting specific values (which don’t mean anything to your respondents), or when you have to analyse data in a particular way.
When you set your reporting values, go back to the purpose of your survey. Think about the values that are going to allow you to achieve the goals and objectives, and that will fit the purpose. That way, you will design questions that give you effective and useful reports.
Well-designed surveys = valuable data
Pay careful attention to all elements of the design to avoid mistakes, improve response rates, ensure quality data, and help produce valuable reports.
There are pros and cons to each type of survey question.
Closed questions ask respondents to choose from a specified set of possible answers. This type of question includes rating scales, demographic questions, multiple choice and forced choice. Use closed questions to get quantified results. Closed questions allow easier comparison and analysis of results. They are also easier to present and discuss. Closed questions are typically quick and relatively simple for respondents to answer.
Multiple choice (single answer) questions
Multiple Choice with single answers limits respondents to selecting one response. Among other purposes, these questions can be useful for asking demographic questions.
Be careful to consider all options. Respondents can be frustrated if the response they want to give is not provided.
Multiple choice (multiple answer) questions
Multiple choice with multiple answers allow respondents to select multiple responses. The minimum and maximum number of required responses can be set. For example, you may allow respondents to select all responses, or you may require them to select a specific number or a range (like between 2 and 3). Again, remember to consider all options to avoid respondent frustration.
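The minimum/maximum rule described above amounts to a simple count check. Here is a minimal sketch in Python; the function name and limits are illustrative assumptions for this example, not any particular survey tool’s API:

```python
# Illustrative check for a multiple-answer question that requires
# between 2 and 3 selections. The helper name and the limits are
# assumptions for this sketch, not a specific product's API.

def selections_valid(selected, minimum=2, maximum=3):
    """Return True when the number of selected options is within the limits."""
    return minimum <= len(selected) <= maximum

print(selections_valid(["Email", "Phone"]))                 # True: 2 selections
print(selections_valid(["Email"]))                          # False: too few
print(selections_valid(["Email", "Phone", "SMS", "Post"]))  # False: too many
```

A survey platform applies the same logic before letting the respondent move to the next page.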
Rating scale questions
Rating Scales provide a common set of scale options for respondents to answer survey questions. Using the same rating scale allows comparison of responses across multiple questions in the survey. Rating scales help to discover varying degrees of opinion. They are typically quick and easy for respondents to answer.
The underlying principles for developing a rating scale are:
- The meaning of each scale item should be easy to interpret
- As much as practical, each scale point should have the same meaning to all respondents
- There should be enough scale points to differentiate respondent opinions
- The scale responses should be reliable. If the same question was asked again, at the same point in time, the respondents should provide the same answer.
- The points in the scale should be consistent with the primary principle of the scale (i.e. don’t mix up unrelated terminology)
- Typically, we recommend having a response option for those respondents who do not have enough information to answer (e.g. Don’t know)
There are two key questions about rating scales.
- How many options to include in the scale?
In most situations, five or seven point scales work best and provide the most reliable data. Typically, we prefer five point scales unless questions specifically require greater differentiation. Five point scales provide valid data and are easier for respondents to complete.
There are some exceptions, as in the case of a Net Promoter Score survey which has a scale of zero (Not at all likely) to 10 (Extremely likely) where the points in between are not labeled.
- How should the response options be labelled?
Our recommendation is to label each response option with words that clearly define what each point means. Words are better than just numbers for several reasons. For example, if you provide a range of 1 to 5 without labels, what does each number mean to the respondent? Additionally, labelling only the first and last number in a scale still leaves the question of what the middle numbers mean.
Avoid complex scales
We also recommend avoiding overly long or complex response options. The objective is to allow respondents to answer in a way that differentiates without providing too many points where the scale becomes difficult to answer or overly complex.
Consider a balanced scale
Consider using a Balanced Scale which gives respondents an equal number of response options around a midpoint. Balanced scales allow respondents to select a neutral response rather than forcing responses that do not match how they feel. An example would be:
- Strongly disagree
- Disagree
- Neither agree nor disagree
- Agree
- Strongly agree
- Don’t know
Provide an option for respondents who do not have enough information to answer
We recommend providing a scale option for those who may not be able to answer. Examples include “Don’t know” or “Not applicable” or “Prefer not to answer”. Perhaps the respondent has had no experience with the specific question being asked. Don’t force them into making a choice without relevant information.
Tips for creating rating scales
- Mostly, five point scales will be suitable
- Label each scale item
- Provide a Don’t Know option
- Number response options from low to high (e.g. Strongly disagree = 1 and Strongly agree = 5)
- Use odd numbers and create a balanced midpoint
- Space the scale options/labels as evenly as possible to cover the full range intended
- Do not use too many scales. A single scale can work well, is easy for respondents, and makes responses easier to compare and analyze.
Use Yes/No questions sparingly and only when you need an absolute response or are qualifying respondents.
Optional versus Required responses
Our recommendation is to make questions optional in most cases. That way you avoid forcing people to answer questions that they really don’t want to answer; forced answers may not be valid responses.
Design the survey questions to achieve the objectives. Avoid anything that is irrelevant.
A survey writing check list
- Think about the respondents and use appropriate language
- Make your questions clear and as short as possible
- Personalise the language where possible
- Ensure that your question wording matches the scale and response options you choose
- Make sure to use a time frame if important
- Avoid leading questions
- Avoid biased questions
- Avoid double barreled questions
- Ensure the response options are balanced
- Don’t ask overly complex questions
- Don’t make your questions too broad
- Test your questions against the objectives
Conditional Logic lets you create dynamic surveys that change what a survey respondent sees and what happens based on their responses.
Actions are what you want to happen when responses meet defined Conditions and Triggers. Actions include:
- Hide selected content, e.g. questions or instructions (often called skip logic)
- Finish the survey
- Redirect to a specified url on finish
- Change the finish message (HTML options are available)
- Append text to the finish message (HTML options are available)
- Send an email or notification when the survey is complete
- Tag the participant when the survey is complete
Triggers are defined responses to survey questions. Each trigger has answer options to check against. These options vary depending upon the question type. Triggers include:
- Is equal to/Is not equal to: Does a strict comparison with the answer. The answer supplied must exactly equal the selection. Multiple choice answers must have all and only the answers specified in the logic to fire. This option is not available for free text questions.
- Contains/Does not contain: This option is only available for free text responses. It checks to see if a response contains a word or sentence.
- Is one of/Is not one of: This is a non-strict equal to comparison and checks if answers contain certain responses.
- Is more than/Is less than: A comparator that is only available for Rating Scale questions. It checks to see if the answer is higher or lower than a specified value.
- Is more than or equal to/Is less than or equal to: The same as more than/less than but it includes the specified value.
Conditions contain one or multiple Triggers that, if met, will result in the Action. Conditions can be set to meet “any” or “all” triggers.
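To make the trigger and condition mechanics concrete, here is a minimal sketch in Python. The function names and structure are illustrative assumptions for this example, not Spark Chart’s actual API:

```python
# Hypothetical sketch of trigger evaluation and "any"/"all" condition
# matching. All names are illustrative, not a real survey tool's API.

def is_equal_to(expected):
    # Strict comparison: the answer must exactly equal the expected value.
    return lambda answer: answer == expected

def is_more_than(threshold):
    # Rating scale comparator: fires when the answer exceeds the threshold.
    return lambda answer: answer is not None and answer > threshold

def condition_met(triggers, answers, match="all"):
    # triggers: list of (question_id, trigger) pairs
    # answers: dict of responses keyed by question_id
    results = [trigger(answers.get(qid)) for qid, trigger in triggers]
    return all(results) if match == "all" else any(results)

# Example: fire an action when satisfaction is above 3 OR "Other" was chosen.
triggers = [("q1_satisfaction", is_more_than(3)),
            ("q2_channel", is_equal_to("Other"))]
answers = {"q1_satisfaction": 2, "q2_channel": "Other"}
print(condition_met(triggers, answers, match="any"))  # True: one trigger fired
print(condition_met(triggers, answers, match="all"))  # False: satisfaction is not > 3
```

The same response set can therefore pass an “any” condition while failing an “all” condition, which is why choosing the right matching mode matters.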
Integrate the survey with third party applications to automate processes and create value.
Pilot the survey to validate the questions and the effectiveness of the messages.
Choosing the pilot group
At a minimum, first pilot the survey yourself. Then, consider piloting with others (e.g. members of the survey project team). Beyond that, identify people from the target audience and engage them in the pilot. The number and nature of people to involve in the pilot will depend upon a range of factors, including the goals and objectives of the survey, the size of the target audience, the diversity of the target audience, and the availability of pilot participants.
Two types of pilot are Participatory and Undeclared
In a Participatory Pilot, the pilot participants are clearly informed that they are taking part in a survey pilot. This type of pilot is useful when feedback is needed from the target audience about the survey content, processes and messages.
In an Undeclared Pilot, the participants are not aware that they are part of a pilot group. The survey is issued as if the survey were real. This type of pilot is useful to ensure there are no issues with survey completion or to review the results received.
Be clear about what is expected of the pilot group
What you communicate to the pilot group is likely to depend upon whether the pilot group are aware that they are participating in a pilot or not.
In a Participatory pilot, engage pilot participants in such a way that makes them want to be involved and to help. The pilot participants need to be aware of what is expected in the pilot process and how their pilot data will be used. More than likely, the survey will be refined after the pilot, so the responses from the pilot group will need to be deleted. The pilot participants need to be aware of this and that they will be asked to complete the survey again when it goes live.
These pilot participants need clear guidelines about 1) what they are to look for and 2) how they will provide feedback about the survey. E.g. can they provide their feedback in the live pilot survey? Do they have to take notes and send them separately? Is an online collaboration tool being used to collate pilot feedback? When are they expected to provide feedback?
In an Undeclared pilot, conduct a survey with a small live group of target respondents without them being aware that they are participating in a pilot.
Refine the survey and the communications
After the pilot, review the feedback and refine the survey, instructions, key messages and the communications. Depending upon the pilot, it may be necessary to conduct a follow up pilot, particularly if major issues are identified.
Once satisfied with the survey, it is important to get sign-off by the project key stakeholder.
Avoid these common survey mistakes.
Fail to establish and communicate clear survey policies
Clearly defined and communicated survey policies are essential in a survey project. Policies guide the actions and decisions of those who manage the project and protect the respondents. Policies should consider things like anonymity, how the data will be used and who will have access to the data.
Fail to pilot the survey
Always conduct a survey pilot. A pilot is an opportunity to get feedback on the messages communicated, instructions, survey questions and the scales used, along with the survey structure and usability on different devices. Based on the pilot feedback, refine the survey and the communications to address any issues identified.
Send the survey to a large group without starting with a small group
Wherever possible, don’t send the survey to a large group of respondents without first sending it to a smaller group. Confirm that everything is fine with the smaller group before sending to the large group.
Ask leading or biased questions
If there is an answer that you’re hoping to get, you can bias your questions to get it. Leading questions are the easiest way to do this. An example would be, “How happy are you with our fantastic service?” In that question, you tell the respondent that the service is fantastic. A better approach would be to ask about the service and then use a rating scale that allows the respondent to assess the service level.
Create survey fatigue and tire out your respondents
Survey fatigue is common. Ways to tire out your respondents include:
- Ask too many questions
- Put a lot of questions on every page
- Ask long questions
- Ask confusing questions
- Ask overly complex questions
Piloting the survey will give you useful information as to whether the survey is tiring.
Use complicated or ambiguous language
When respondents don’t understand survey questions, the survey data can become worthless. Don’t use complicated or ambiguous language that is full of jargon.
Assume respondents know more than they do
Don’t leave room for ambiguity and don’t rely on prior knowledge. Also, don’t ask respondents to remember things from some time ago.
Ask two questions in one
As much as possible, make sure each question addresses a single idea. This is not always possible and needs to be balanced against the survey length.
Ask too many open-ended questions
Choose open ended questions wisely and don’t over-use them. Open-ended questions are time consuming to answer, and analyzing the responses takes considerable work. So, be prudent in asking them.
Lack of attention to detail
Pay attention to the detail in the survey. Poor grammar and spelling errors are unprofessional and send a message about the importance of the survey. Minute details in grammar can also affect the validity of the data.
Inadequate response options
Make sure that the response options cover the needs of the target group of respondents. It can be very frustrating to be forced to select an answer when no relevant choice is available.
Use too many different rating scales
Using many different scales has two drawbacks. First, it becomes difficult and time consuming for respondents to answer the survey. Second, interpreting the results becomes difficult. A standard rating scale allows easier comparison of responses across questions.
Use too many response options
In multiple response questions, don’t have too many response options. Choosing answers from long lists is difficult and can lead to respondents not taking the time to respond accurately. So, keep the response options reasonable.
Use the wrong page structure for the type of survey and the audience
Carefully consider the length of the survey, the type of audience and the device that the respondents are likely to be using when they complete the survey. Then design the structure to suit. For example, long surveys can be very frustrating if only one question is placed on each page. On the other hand, placing multiple open ended questions on one page creates the risk of respondents losing what they have entered if they are interrupted and they have not saved their responses.
Force responses to every question
Do not overuse the mandatory questions. Use them sparingly. You may be forcing people into a selection that they don’t want to make, thereby invalidating the data. Forcing responses can be very frustrating. And, if someone does not really want to answer a question then they may just choose any response so they can move on.
Not labelling rating scale steps
We prefer not to use long rating scales like a 10-point scale, except for Net Promoter Score. Where possible, each point in a rating scale should be labelled to give meaning (albeit subjective) to the scale points. A large scale can be frustrating for respondents, and unlabelled options leave respondents guessing what each point means. Five point and seven point scales are preferable. Not labelling rating scales is a very common mistake.
Fail to provide a ‘Not Applicable’ or ‘Don’t Know’ option
Consider the audience and the questions being asked. In many cases, you should include an option for the respondent if they do not have experience with the question being asked. It is often unreasonable to expect that every respondent will have relevant experience to respond to every question in the survey. So, consider including a Don’t Know or Not Applicable option in the rating scale.
Select deployment methods relevant to the audience and the survey objectives.
Some tips for emailing surveys
- Make the ‘From’ name easily recognisable. People like communications, not ‘blasts’.
- Subject lines are important. Keep subject lines enticing and short.
- Spam filters will catch words like “important message”, “offer”, and “free”. They are also more likely to catch subject lines with the recipient’s name in them.
- Avoid dollar signs, exclamation points, and all-capitals.
- There are free online tools to test your subject for best results.
- Personalise an email whenever possible! Personalisation can have a big impact on your response rates.
- Thank people in advance, and assure them about the confidentiality or anonymity of their responses
- Tell them why you’re doing the survey
- Tell them why participating is beneficial for them
- Tell them how long the survey will take (and be accurate – run it yourself in some tests to find out)
- Include a deadline, so there is a sense of urgency
- Let recipients know that they’re part of a select group who have been invited to participate
- Consider offering incentives
The idea is to make the entire exercise as unsurprising as possible. When people know what’s going to happen, they feel safe and comfortable, and that makes your survey feel easy to complete.
You will need to send more than one email
The email invitation is just one in a series of communications that you will need to design. Typically, several reminders will be needed to maximise the response rate. Be friendly. Recognise that people have busy lives and demonstrate empathy. Be clear, and immediately action any opt-out requests.
Where practical or relevant, offer to share the survey results with those who took it.
Don’t be a spammer
Make sure that the people who get your emails are aware of your survey or have agreed to receive emails or offers from you. Unexpected email communications are far more likely to result in spam reports, or deletion.
Autoresponders are used to store lists of people and send automated emails. They provide a way to deploy surveys automatically and for the survey invitation to be personalized.
Post on websites, Intranets and social media
Websites and Social Media provide important forums to conduct surveys and get feedback. Be aware that public networks are open, which makes it difficult to control who sees the survey and responds to it. However, it is increasingly possible (with private groups and demographic information available) to manage where and to whom the survey displays.
Engage participants and motivate them to complete the survey.
The target audience will influence the response rate
In general, there are two basic audiences for any type of survey: Internal and External.
An internal audience is typically made up of employees or people who belong to an organization or group. Surveys sent to internal audiences tend to have much higher response rates than those sent to external audiences.
Let’s say a company issues an internal survey to its workers to learn what their challenges are. In this instance, it’s easy to see how the employees may be eager to provide this feedback. It’s a chance for workers to tell management what can be improved. Also, they may benefit by helping the company.
Getting an external audience, like customers or suppliers, to respond to surveys is typically more difficult. Their motivation is likely to be lower than an internal audience. External audiences may not see the benefit in completing a survey. Even when targeting specific groups of customers (such as those who made a recent purchase) and offering a reward for their participation, you still might not receive strong response rates.
Here are some ways to help achieve your target response rate.
Be clear about the value
When people understand how their survey responses will be used, they are more likely to give their time to help. Communicating this is very important, as is being clear about any incentives attached to a response. Make sure the invitation covers:
- Why someone should take your survey
- How long it really takes to complete (test it!)
- The number of questions in the survey
- What happens to the survey data
- Whether the data is anonymous and, if not, how it is stored safely
Make sure the survey aims are clear in the invitation, instructions and welcome. Keep participants updated with progress; share the collective results at the end; and keep in touch about how the survey is used.
Keep it short
Shorter surveys get more engagement. Respondents typically complete five closed questions per minute, or two open-ended questions per minute. Keeping surveys brief helps avoid “survey fatigue.”
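The rule of thumb above can be turned into a quick length check during design. This is a rough estimate only, using exactly the per-minute rates quoted above:

```python
# Rough survey length estimate using the rule of thumb:
# ~5 closed questions per minute, ~2 open-ended questions per minute.

def estimated_minutes(closed_questions, open_questions):
    """Estimate completion time in minutes from the question counts."""
    return closed_questions / 5 + open_questions / 2

# A survey with 15 closed and 3 open-ended questions:
print(estimated_minutes(15, 3))  # 4.5 minutes
```

If the estimate runs well past five to ten minutes, that is a signal to cut questions before piloting. Always verify the figure by timing real test runs, as recommended earlier.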
Continue engaging anyone who hasn’t finished it
Ideally, every survey will have a progress bar that lets the respondent know how close they are to finishing. But people are busy or get interrupted and can’t always finish a survey they’ve started. Don’t let them slip away! Send a few gentle reminders by email to let them know they’re almost done, and how much you value their feedback. Space out the reminders by a few days and send them at different times of day to re-engage them.
Ask the right questions
Go back to the purpose of your survey, and only ask questions that are directly relevant to the purpose. If gender isn’t relevant, don’t ask! If location isn’t relevant, don’t ask! It’s tempting to collect extra information just because you can. But recipients will appreciate you sticking to the point.
Great design helps everyone
Good, well designed surveys and questions help both the respondents and those analysing data. Bad survey design will get you data; but the quality of that data becomes questionable.
Share the results if appropriate
A simple way to encourage people is to offer respondents the opportunity to see survey results. Or, if they are going to be made public, the opportunity to see them before everyone else.
Very often, concerns that sharing survey data will give away a competitive advantage are unfounded. In fact, many companies – including the largest consulting companies in the world – retain their advantage because they share the results of their surveys.
Incentives can motivate people to give feedback. The trick is to know what interests the audience.
Make the incentive relevant
Incentives don’t need to be of great monetary value. Just make sure the incentive is relevant to the audience and piques their interest enough to give their feedback.
Some things to keep in mind include:
- How your respondents will receive their incentive
- Who will be rewarded (e.g. everyone who completes the survey or a lottery to find the winner)?
One incentive is to share some of the results if appropriate
A simple way to encourage people is to offer respondents the opportunity to see survey results. Or, if they are going to be made public, the opportunity to see them before everyone else. Concerns about losing a competitive advantage are often unfounded. Many companies – including the largest consulting companies in the world – create advantage because they share the results of their surveys.
Know (and stick to) a budget with survey incentives
Having a defined budget is fundamental to your success. Know your financial limits and stick to an incentive that stays within those parameters. The cost of providing an incentive can quickly add up, so carefully selecting the right one can protect both the budget and the quality of the responses.
It’s also critical that you keep your promise of your incentive, or you risk alienating your audience.
Decide how survey incentives will be allocated
When deciding who qualifies for an incentive, there are several options. For example, reward everyone, enter everyone into a lottery, or make a donation (e.g. to a charity) based on the completion rates. This decision can have a significant financial impact. Surveys with a guaranteed incentive for every respondent typically include a nominal gift, whereas lottery style incentives often have a much higher value.
If you aren’t sure whether your message is clear enough, test it on a few people before you send it.
Survey incentives need to match your audience’s interests
Make sure that your incentive meets the needs (and desires) of the audience. Take the time to understand what interests and motivates the target group. Rewards do not necessarily need to be financial. A reward could include sharing the results.
Ease of delivery
One important thing to consider when choosing an incentive is ease of delivery. Online gift cards and coupons make for easy delivery, and only require an email address or phone number. You can send them by email or SMS. Respondents get their gift immediately after completing the survey.
Decide when you will release the incentive
Decide when you will send the incentive to the respondent. While it seems logical that you should only send a reward after a survey has been completed, some studies suggest otherwise. Sometimes respondents feel obliged to complete the survey after receiving the incentive. Guilt can be a powerful thing!
Prevent duplicate and fake entries
If you’ve found the perfect survey incentive, there is a real chance that people will want to get it more than once! It also means that you might get people who aren’t truly interested in your product or service wanting the incentive too. So, consider strategies to disqualify people seeking to capitalise on the rewards.
Donate to charity
If your target participants include professionals, then offering to donate to charity on their behalf can be very effective. It can also be extremely effective if your target audience has a shared vision, cause, or mission. For example, if you need to survey the parents of children in your primary school, offering to donate a certain amount to a children’s hospital might be a great incentive. This is because the cause (helping sick children) is something that resonates with every parent.
Reminding people about the survey is really important. Everyone is busy, and non-critical requests fall to the bottom of the list unless you send a reminder.
In fact, when you don’t send reminders, you are likely to get a low response rate! Keep in mind, though, that the reverse can be true: too many reminders can be detrimental. So, carefully crafting your communications is essential. How many survey reminders you send will depend on your survey goals and your audience. For an employee survey, organisations usually expect their staff to respond, so more reminders, and more urgent reminders, will be warranted.
Use feedback to engage and motivate your team.
Share reports instantly with Spark Chart
Spark Chart has features to share reports easily and instantly, eliminating the need to export results into other applications. This is a great way to present the results.
Report Shares can be customised for different audiences. A title, welcome text and key messages can be added to the Report Share and are visible when viewing on the web. And, reports can be broken down into sub reports.
When a Report Share is created, a web link is generated. Report Share links can be made public, or they can be protected with a password or PIN.