In the past, when I was asked to estimate a market research scripting project, my planning was often a little too optimistic. Why? Because I focused too much on the task at hand: scripting the survey. Instead, I should have been thinking about everything that surrounds such projects. So I quickly learned to add time for project management; project planning, administration, discussions with the client, etc. But still, my projects took longer than anticipated. The bottleneck turned out to be testing the questionnaire.
While I have great confidence in my programming skills and the quality of the instructions I receive, we are only human. Mistakes can be made, instructions can be misunderstood and clients can change their minds once they see the questionnaire “in action”. So, in my experience, you should expect at least 2 rounds of testing and feedback processing.
3 ways of testing your questionnaire
But since testing can easily take up a lot of time, I went on a quest to find ways to make it more efficient. I identified 3 ways to test how respondents might fill out the questionnaire. To explain them, I’ll use the example of a brand awareness question with 10 (multiple choice) answer options; ten car brands, for example. You can:
- follow the ideal scenario, as proposed/expected by the client
- go for extremes, for example by selecting only 1 item or all the items
- cover all your bases with bulk testing, which can be achieved through automated testing
The ideal scenario
Respondents are allowed to select as many of the 10 items as they want, but only a maximum of 3 items can be used in the follow-up questions. So your first test of the questionnaire would be to select exactly 3 items. (Don’t forget to test browsing back and modifying your selection; always a good test!)
Going to extremes
Although unlikely, the respondent might only select 1 brand, or all 10. Test with both scenarios, and make sure the routing of the questionnaire handles the input accordingly.
Selecting 2 or more items will trigger follow-up questions about the preference between these items, but when only 1 item is selected, you should pre-fill and skip those questions. It saves the respondent the time and frustration of answering questions that no longer apply.
When more than 3 of the 10 items are selected, you need to bring the selection back to the maximum allowed of 3. For this task, check with the client whether the brands should be selected randomly, or whether another guideline applies: for example, selecting the brands with the fewest completes so far, or making sure a specific brand is always among the 3 selected.
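These routing rules can be sketched in a few lines of code. To be clear, this is purely an illustration: the function names and the quota counter are my own assumptions, not part of Dub InterViewer or any other tool.

```python
import random

def select_for_followup(selected, completes_per_brand, max_items=3, forced_brand=None):
    """Reduce a respondent's brand selection to at most `max_items`
    for the follow-up questions, preferring brands with the fewest
    completes so far, and optionally forcing one brand into the selection."""
    if len(selected) <= max_items:
        return list(selected)
    # Sort by number of completes (fewest first), with a random tie-break.
    pool = sorted(selected, key=lambda b: (completes_per_brand.get(b, 0), random.random()))
    if forced_brand is not None and forced_brand in pool:
        pool.remove(forced_brand)
        return [forced_brand] + pool[:max_items - 1]
    return pool[:max_items]

def needs_preference_question(followup_brands):
    """With only 1 brand selected, pre-fill the preference and skip the question."""
    return len(followup_brands) >= 2
```

With a selection of 4 brands and quota counts per brand, `select_for_followup` keeps the 3 brands with the fewest completes; passing `forced_brand` guarantees that brand a spot, filling the remaining 2 slots the same way.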
Bulk testing
Once you feel confident you have covered the bases with manual testing, it’s time to proceed to bulk testing. Bulk testing gives you a high volume of responses, lets you check the flow of your questionnaire, and allows an initial analysis of the collected data. You could drum up a group of people to go through the survey for you, but why not save yourself precious time and let the computer do it?
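To get a feel for what bulk testing does, here is a minimal sketch of a random-answer simulator. Everything in it (the brand list, the branch names, the survey logic) is my own assumption for illustration, not any vendor’s API: it fires many simulated respondents through the brand question and counts how often each routing branch fires, which quickly exposes branches that never get hit.

```python
import random
from collections import Counter

BRANDS = ["BMW", "Audi", "Ford", "Kia", "Toyota", "Honda",
          "Mazda", "Volvo", "Fiat", "Seat"]

def simulated_respondent():
    """Pick a random number of brands (1..10), like an automated test robot would."""
    k = random.randint(1, len(BRANDS))
    return random.sample(BRANDS, k)

def run_bulk_test(n=1000):
    """Run n simulated interviews and count how often each routing branch fires."""
    branches = Counter()
    for _ in range(n):
        selection = simulated_respondent()
        if len(selection) == 1:
            branches["prefill_and_skip_preference"] += 1
        elif len(selection) > 3:
            branches["reduce_to_3"] += 1
        else:
            branches["ask_preference"] += 1
    return branches

counts = run_bulk_test()
# Every branch should be exercised; a branch stuck at zero is a routing red flag.
```

Dedicated tools do far more than this, of course, but even a simulator this small will tell you whether all your routing paths are reachable before real respondents arrive.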
With Nebu’s Dub Tester, the application that comes with Dub InterViewer, you can quickly run a series of automated interviews. It’s a good way to test your logic and to create a large number of records, which is useful when working with quotas. Dub Tester is part of the Nebu suite and can be installed locally.
With the paid web application SurveyTester.com, it’s also possible to run automated tests. It has some great extra features, such as testing with different URLs and getting an overview of screenshots of your survey on various devices. My personal favorite is checking the pages & routing to detect “dead ends” and “loops”. It also generates test cases, allowing you to redo the same interview with the same answers, which is great for checking whether the changes you made have the desired effect.
Please note that automated testing is a tool that helps, but it will not detect every issue. By collecting a large number of responses, however, you can see where your manual testing should focus.
Nebu has hosted a great webinar about its collaboration with SurveyTester.
Feedback from your client
Once you have done your part of the testing, the link can be shared with the client. But instead of them recording videos or screenshots and emailing back and forth, why not use the feedback module of Dub InterViewer? The tester can give precise feedback on the specific page it applies to. You can find the feedback in Dub ManageFrames, so you can tackle the items one by one.
Need help with testing your project? Why not try out the tools mentioned or give me a call so I can help raise the quality of your questionnaire.