Surveys That Work

Author: Caroline Jarrett
Year: 2021

Review

Despite doing a good job of introducing me to new terminology and other papers in the space, after 300 pages I felt there wasn’t much depth to this book. I was expecting an abundance of advice that I could run with, but the book didn’t quite deliver on that.

This book suffers from the standard non-fiction format trap: concepts are constantly summarised and duplicated.

I’m tempted to try Survey Methodology (Groves, Fowler et al., 2009), as it seems to have inspired the structure of this book.

Key Takeaways

The 20% that gave me 80% of the value.

  • A survey is a process of asking questions, answered by a sample of a defined group of people, to get numbers that you can use to make decisions.
  • A survey is a quantitative research technique: when you choose a survey, you’re choosing to end up with a number as your result. You can include the odd qualitative question. BUT if your goal is mainly qualitative then use a qualitative method.
  • Focus your survey process on the specific decisions you hope to make.
  • Minimise Total Survey Error (the sum of all the individual survey errors)
  • Much of the survey process is interconnected (the survey octopus):
    • GOALS: The reason you’re doing it
    • SAMPLE: The list you sample from
    • QUESTIONS: The questions you ask
    • SAMPLE: The sample you ask
    • QUESTIONNAIRE / RESPONSES: The answers you get
    • FIELDWORK: The ones who respond
    • REPORTS: The answers you use
    • RESPONSES: The ones whose responses you use
  • Aim for light touch surveys. Each additional question adds to the burden of your survey
  • Surveys are best used with or alongside other research methods.
  • For two surveys to be comparative you need to use the same method.
  • Surveys are stronger at answering ‘how many’ questions. ‘Why’ questions are often better answered by interviews or observational studies
  • Establish clear goals for the survey
    • Find your most crucial question by asking: if you could only get answers to one question, which would it be?
    • We need to ask ______. The question ______. So that we can decide ______.
  • Question whether a survey is the best method. Could you solve it with analytics or A/B testing?
  • Create a draft of your presentation early based on the results you expect to get from your survey
  • For best results…
    • Interview first so you know a little more about how people think
    • Do a cognitive interview to discover whether the questions are working
    • Do a usability interview to test the questionnaire
    • Do a pilot test before the survey
  • Net Promoter Score is the most well-known single-question survey. It was originally designed to measure loyalty; the question was chosen because it showed the strongest correlation with loyal behaviour.
    • If brand is really important to the purchase, then NPS is reasonable (e.g. premium cars)
  • Instead of asking about satisfaction:
    • Ask: what one word best describes your emotional response?
    • Consider the Microsoft Product Reaction Cards (Benedek and Miner, 2002)
  • Response rates vary by the way you deliver your questionnaire. Do a pilot test to help determine response rate.
  • Response depends on trust, effort and reward.
  • Monetary incentives immediately delivered beat other types of gifts.
  • Non-response error: when people who choose to answer your question differ from people who choose not to (creating bias)
  • Discover the burning issue to avoid indifference. What’s the overlap between what people are passionate about and what you want to know? Tap into the burning issue that people want to talk to you about.
  • A statistically significant response is one that is unlikely to be the result of chance. A result that is significant in practice is one that is meaningful to the world
  • There are downsides to asking everyone (in an attempt to avoid sampling error): response rates are lower than if people are told they’re in a sample; it’s the most expensive approach; you can’t iterate; and you often don’t need or want everyone’s opinion anyhow
  • Find the people who you want to ask:
    • Narrow Down Method (start with a list that includes everyone)
    • Catch in the moment (e.g. feedback sheets on a training course)
    • Snowball up (e.g. send this to other football players)
  • Representativeness matters more than response or response rate.
  • What can possibly go wrong with sampling:
    • Coverage error: the list you sampled from was wrong
    • Sampling error: you choose a sample that isn’t representative
    • Non-response error: people who respond are different to those that don’t (in ways that matter)
  • The p-value is the probability that the data would be at least as extreme as those observed, if the null hypothesis were true.
  • Many scientists believe that statistical significance is being misused. See ‘Scientists rise up against statistical significance’. ATOM instead: Accept uncertainty. Be Thoughtful, Open, and Modest.
  • Good questions are easy to understand
    • People have to first perceive the question and turn it into something meaningful to them
    • Shorter questions are generally better
    • Use familiar words in familiar ways
    • Only ask people to focus on one thing at a time (don’t use double-barrelled questions)
  • There’s an approximate forgetting curve: therefore ask about a recent, vivid experience
  • The context effect happens when a question changes its meaning according to its position in a questionnaire or conversation
  • Test your questions in cognitive interviews
    • Find a few people in your group
    • Ask the person to work through the questions and…
      • Read the question out loud
      • Explain it back in their own words
      • Think aloud to find some potential answers
      • Tell me about the choice of which answer to give
    • Do you notice any hesitation or confusion?
  • Use the simplest possible response format.
  • Use closed questions if you can.
    • Find out the true range of answers first
  • Open boxes can provide surprising answers (somebody might pay way more than you expect)
  • If you want people to give….
    • Exactly one answer → radio buttons
    • One or more from several choices → checkboxes
    • Not sure, or anything else → open box
  • Do not have required questions; make them all optional. If you use a required question, it needs to be valuable enough to justify losing some people who are trying to answer and getting misleading answers from others
  • Do a usability test…
    • Find somebody willing to answer it
    • Watch them. Take notes about everything
    • Ask them for their views on the questionnaire at the end
    • Reflect on what to change if anything.
  • A Likert scale measures attitude
    • A Likert item is made up of a statement and a set of response points
      • Response points can be…
        • a five-point set from “Strongly Agree” to “Strongly Disagree”
        • a three-point set with “yes” / “?” / “no”
        • or a set of five longer statements that reflect opinions along a continuum
  • People wrongly say “Likert scale” when they mean the response format (“Strongly Agree” to “Strongly Disagree”).
  • Likert items often appear in grids (a.k.a. matrix questions)
    • Grids create a big spike in people abandoning a questionnaire
    • They can be harder to answer (understand, find, decide and respond)
  • A Likert scale is always a set of related questions that answer a single research question about an attitude. It should be possible to combine the responses into an overall score for that attitude… otherwise you’re just listing questions in a grid format
  • Choose an appropriate reward. The best incentives are guaranteed and immediate.
  • Good invitations:
    • Build trust: say who you are; say why you’ve contacted this person specifically; include your contact info to show a real person is behind it
    • Increase perceived reward: explain the purpose; explain why this person’s responses will help; if there is an incentive, offer it
    • Help people estimate the effort: outline the topics of the survey; say what the closing date is; say how many questions there are; do not say how long it will take (unless you’ve tested it)
  • Do a pilot test
    • Use about 10% of your sample
    • Start your data cleaning
    • Create draft deliverables
    • You’ll catch mistakes
  • Decide whose responses you will use
    • You need to decide what to do with missing data. Options:
      • Remove the person from the data set · Simple but loses data
      • Remove only problematic answer · Preserves data, can be confusing in reports
      • Impute the missing values (e.g. mode, median, random) · complex and hard to explain
      • Design a better survey next time
  • You can weight a survey (assign a multiplication factor to each response in proportion to its representativeness)
  • Get to know your numeric data
    • Check the min and max for responses that aren’t plausible
    • Mode is good for decision making (e.g. a good experience for the most frequent customer type)
    • Mean is sensitive to outliers (Jeff Bezos walking into a bar raises the average income a lot)
    • Variance is a measure of how spread out the values are compared to the mean.
      • The standard deviation is the square root of the variance → and it gets you back to original units!
    • Visualise your distributions and correlations
    • Look into outliers
    • See what the data looks like in different chart formats and pivot tables
  • Attributes of a good chart:
    • Honest
    • Easy to read
    • Correctly labelled
    • Has a clear message
    • Works for the people who read it
  • Avoid 3D charts. Avoid fancy charts.
  • Column charts are an alternative to bar charts and easier to label.
  • Remove visual clutter from charts.
  • For slides use the assertion/evidence format (Garner, Alley et al., 2009). The title of the slide is the main point, made in a full sentence, and is supported by further detail or evidence.
  • Inverted pyramid is the journalistic style that starts with the most important message and then supports it with extra detail.
Breakdown of Total Survey Error diagram
  • The 7 steps to the survey process:
    1. Goals · Establish your goals · Questions you need answers to
    2. Sample · Decide who to ask and how many · People you will invite to answer
    3. Questions · Test the questions · Questions people can answer
    4. Questionnaire · Build the questionnaire · Questions people can interact with
    5. Fieldwork · Run the survey from invitation to follow-up · People who respond
    6. Responses · Clean and analyse the data · Answers
    7. Reports · Present the results · Decisions

Deep Summary

Longer form notes, typically condensed, reworded and de-duplicated.

Chapter 0: Definitions

  • A survey is a process of asking questions, answered by a sample of a defined group of people, to get numbers that you can use to make decisions.
  • It’s easy to ask questions, but harder to get people to answer them.
  • A survey is a quantitative research technique: when you choose a survey, you’re choosing to end up with a number as your result
    • You can include the odd qualitative question. BUT if your goal is mainly qualitative then use a qualitative method
  • A survey should be focused on the specific decisions that your organisation will make based on the results of the survey.
  • Total survey error focuses on reducing problems overall
    • We want results to be valid, and accurately measure what we claim
    • If repeated, you’d want to get the same result
    • People choose what surveys to answer and what not to answer
    • Ask too many questions, or too many irrelevant questions or hard to answer questions and people will drop out
    • Total survey error is the consequence of all the individual survey errors
  • Meet the survey octopus
    • The survey process sits between:
      • Why you want to ask?
      • Who you want to ask?
      • The number / final result
    • The octopus represents the choices you make in surveys
      • All of them are interconnected, if you make good decisions for each you’ll get good results
      • GOALS: The reason you’re doing it
      • SAMPLE: The list you sample from
      • QUESTIONS: The questions you ask
      • SAMPLE: The sample you ask
      • QUESTIONNAIRE / RESPONSES: The answers you get
      • FIELDWORK: The ones who respond
      • REPORTS: The answers you use
      • RESPONSES: The ones whose responses you use
  • Aim for light touch surveys
    • It used to be laborious to survey because you had to travel in person, so surveys were done infrequently and packed with questions
    • Each additional question adds to the burden of your survey. The harder the survey the fewer will respond. The bigger the survey the harder it is to analyse the responses too.
    • It’s now possible to do lots of tiny surveys. The ideal survey has a single light touch question
  • Other Lessons:
    • Surveys are best used with other research methods. They help you establish a baseline and identify areas of further exploration
    • Keep your survey short, by focusing on one particular customer/behaviour you want to understand
    • Be careful with open ended responses. They take a lot of time to analyse. Instead, find out the most common answers, put them as check boxes and add an ‘other’ option with free text.
  • Four different types of surveys:
    • A descriptive survey obtains a number that describes something about a group of people:
      • e.g. 90% of Product Managers haven’t read a book about surveys
    • A comparative survey (a.k.a a tracker) obtains a number that describes something about a defined group of people that will be compared (to the past or future number, obtained by the same method)
      • A longitudinal survey is a comparative survey but over a long period. ‘Long’ is contextual to what you’re measuring
      • For two surveys to be comparative you need to use the same method. Schuman and Presser, in their 1981 paper, showed that changing the order of questions in a two-question survey had a massive impact on the results.
    • A modelling survey asks a variety of questions on topics related to some outcome with the aim of discovering the factors that are associated with it
      • E.g. ask about many factors, and then see which variables are correlated.
    • An exploratory survey gathers whatever information it can about a defined group of people.
      • Often as a springboard for further research. This is OK, but consider conducting some generative interviews or observational studies instead.
    • Be mindful about creating a survey that’s trying to do all 4 of these things.

Chapter 1: Goals: Establish Your Goals for the Survey

  • Write down all your research questions (the topics that you want to find out about)
    • Write everything down, ignore duplicates, aim for variety
    • Take a break, give your subconscious a chance to work
    • Get plenty of suggestions whilst you’re establishing the goal of the survey
  • Challenge your question ideas:
    • What do you need to know?
    • Why do you need to know? (this time / right now)
    • What decision will you make based on the answers?
    • What number do you need to make the decision?
      • If you don’t need a numeric answer, then a survey might not be right
  • Choose the most crucial question
    • Find your most crucial question by asking: if you could only get answers to one question, which would it be?
    • The most crucial question is the one that makes the difference; it provides essential data for decision making.
      • We need to ask ______. The question ______. So that we can decide ______.
    • Attack your most crucial question
      • Example: Do you like our magazine?
        • What do you mean by you? by like? by our? by magazine?
  • Check that a survey is the right thing to do
    • Surveys are stronger at answering ‘how many’ questions. ‘Why’ questions are often better answered by interviews or observational studies
    • Do you need a human to give the answer? Is there a more accurate way to know it or infer it?
    • Triangulate: Use qualitative and quantitative together to get a better understanding of the what and the why.
      • Observe → usability tests and field studies (why? qualitative) · analytics and A/B tests (how many? quantitative)
      • Ask → interviews (why? qualitative) · surveys (how many? quantitative)
  • Create a draft of your presentation based on the results you expect to get from your survey
  • If you don’t know enough about the why then choose to start with observation and interviews
  • Think about what sort of number you need. You need to think about your statistical strategy before you collect the data, not afterwards
    • Do you care about the mean? or the median? or the mode?
    • Is range important?
  • Determine the time you have and the help you need
    • When do you need the result?
    • How much time can you put into it?
    • Do you already have a survey tool? Do you know how to use it?
    • Who needs to be involved?
    • Who will get the results?
    • Who is involved in the decision making afterwards?
  • Interview first → survey later.
  • For best results…
    • Interview first so you know a little more about how people think
    • Do a cognitive interview to discover whether the questions are working
    • Do a usability interview to test the questionnaire
    • Do a pilot test before the survey
  • Lack of validity happens when the questions you ask don’t match the reason why you’re doing the survey.
  • Spotlight: The Net Promoter Score and Correlation
    • Net Promoter Score is the most well-known single-question survey. It was originally designed to measure loyalty; the question was chosen because it showed the strongest correlation with loyal behaviour.
    • Correlation doesn’t mean causation
    • NPS uses correlation as a basis for prediction
    • Methodology:
      • Ask: how likely are you to recommend this product or service to a friend or colleague? Answers are on a scale of 0 to 10
        • Answer 0-6 and you are a detractor
        • Answer 7-8 and you are passive
        • Answer 9-10 and you are a promoter
        • NPS = % of promoters - % of detractors
      • If brand is really important to the purchase, then NPS is reasonable (e.g. premium cars)
      • If you must use NPS then read Fred Reichheld’s book (The Ultimate Question)
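A minimal sketch of the scoring arithmetic in Python (the ratings are hypothetical):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 4 promoters, 2 passives, 4 detractors -> 40% - 40% = 0.0
print(nps([10, 9, 7, 6, 3, 9, 8, 10, 2, 5]))
```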
  • Spotlight: Satisfaction
    • Satisfaction is hard to ask about; it’s a slippery subject. Satisfaction is all about comparisons and depends hugely on framing: satisfied compared to what? To your expectations, your needs, to excellence, to somebody else’s treatment?
    • Bronze medalists are happier with the result than silver medalists (Medvec, Madey, and Gilovich, 1995)
    • Instead of asking about satisfaction:
      • Ask: what one word best describes your emotional response?
      • Consider the Microsoft Product Reaction Cards (Benedek and Miner, 2002)
    • The term satisfaction is a convenient shorthand for a complex blend of feelings and thoughts about expectations, experience, and outcomes. Survey methodologists call these things attitudes.
    • Stakeholders often want to keep track of a summary score for a complex attitude
      • E.g. Are staff happy working here? Did people like this event?
      • When you create a series of statements about an attitude and you combine the answers in a single score, you are building a “Likert scale.”

Chapter 2: Sample: Find People Who Will Answer

  • Some of the people you ask will decide not to answer. The response rate is the ratio of the number of people who answer to the number of people you ask.
  • Response rates vary by the way you deliver your questionnaire
    • National Statistics Surveys: 60-95%
    • Mail surveys: 30-60%
    • Well designed email survey: 5-15%
    • Prominent website survey: 0.001%-0.5%
    • Banner invitation at top of website: 0.01%
  • Do a pilot test to help determine response rate
  • Response depends on trust, effort and reward.
    • Including a $1 bill in a mail survey results in more responses than offering $50 for completed surveys.
    • Analysed survey delivery and incentive methods (Singer and Ye, 2013):
      • Monetary incentives beat other types of gifts
      • Incentives in advance beat promised incentives later
      • Prize draws have little effect on response rates
    • Trust, perceived reward and perceived effort need to be balanced
      • Offering an incentive that’s too high might break trust
    • Trust is also about what people think you might do with the answers
    • Response depends on perceived effort: if people are passionate about a cause (like the gender pay gap), they’ll continue
    • Non-response error: when people who choose to answer your question differ from people who choose not to (creating bias)
    • The zone of response is the number of people who answer compared to the possible answers
      • The zone of indifference → fewer responses from people who care less about your topic. This seems to be the problem with hotel reviews online: make sure you don’t average out a large number of extreme positive and extreme negative reviews!
    • Discover the burning issue to avoid indifference. What’s the overlap between what people are passionate about and what you want to know? Tap into the burning issue that people want to talk to you about
  • Decide how many answers you need
    • Responses you get = response rate × number of people you ask, so divide the responses you need by the expected response rate to work out how many people to invite
    • Face validity is defined as making a choice that looks sensible to your stakeholders
    • A statistically significant response is one that is unlikely to be the result of chance.
    • A result that is significant in practice is one that is meaningful to the world
    • You can use a sample size calculator to determine how many people to ask. It’ll ask for the confidence level you require, population size and margin of error
    • Start with the smallest number that you think will help stakeholders make the decision, you can extend later if needed.
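A minimal sketch of what such a calculator does, assuming the standard normal-approximation formula with a finite-population correction (the population and margin below are made-up examples):

```python
import math

Z = {90: 1.645, 95: 1.96, 99: 2.576}  # z-scores for common confidence levels

def sample_size(population, confidence=95, margin_of_error=0.05, p=0.5):
    """Responses needed for a given confidence level and margin of error.
    p=0.5 is the most conservative guess at the true proportion."""
    z = Z[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite-population correction

# Population of 2,000, 95% confidence, ±5% margin of error -> 323 responses.
# At a 10% expected response rate that means inviting ~3,230 people -- more
# than the whole population, a sign to rethink the margin or the mode.
print(sample_size(2000))
```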
  • The downsides of asking everyone to avoid sample error:
    • Response rates are lower than if people are told they’re in a sample
    • It’s the most expensive way
    • You can’t iterate because you’ve asked everyone
    • Note: You often don’t need or want to get everyone’s opinion anyhow
  • Aim to get just enough of the right people
  • Find the people who you want to ask
    • Three methods:
      • Narrow Down Method (start with a list that includes everyone)
        • Choose an appropriate sample method (e.g. simple random)
      • Catch in the moment (e.g. feedback sheets on a training course)
        • e.g. people who visit our website between two dates
      • Snowball up (e.g. send this to anyone you know)
        • use what you learn to find more people and send to more people
    • You want a representative response, that accurately reflects the views and characteristics of the defined group
    • Coverage error happens when the list you sample from includes people from outside the group you want to ask
    • Starting from a list of customers has challenges:
      • They could… have lapsed, be staff, be test users, be dead or unwell
      • The list could exclude brand new customers, or people who use but don’t purchase
    • To reduce coverage error: be precise about the group, double check your list, ask screening questions
  • The right response is better than a big response
    • Find questions (not too many) that help you work out if somebody is in your defined group.
      • Compare the responses you get to known attributes about your group
      • Increase the sample size from a small base and keep checking
    • Representativeness matters more than response or response rate
  • What can possibly go wrong with sampling:
    • Coverage error: the list you sampled from was wrong
    • Sampling error: you choose a sample that isn’t representative
    • Non-response error: people who respond are different to those that don’t (in ways that matter)
  • Spotlight: Statistical Significance
    • A result that is statistically significant is unlikely to be the result of chance
    • An effect is something happening that is not the result of chance
    • Usually we classify a p-value below 0.05 as statistically significant
      • The p-value is the probability that the data would be at least as extreme as those observed, if the null hypothesis were true.
    • Statistical power is the probability the test will correctly identify an effect (Ellis, 2010)
    • Many scientists believe that statistical significance is being misused:
      • See ‘Scientists rise up against statistical significance’
      • Wasserstein, Schirm et al. proposed ATOM instead
        • Accept uncertainty. Be Thoughtful, Open, and Modest.
    • When talking about uncertainty talk in confidence intervals
      • A confidence interval is a range that estimates the true population value for a statistic
      • You can calculate a confidence interval from the average, the standard deviation, sample size and the confidence level
      • Confidence intervals based on a 95% level, for example:
        • Mean 42 · Std. Dev. 9.8 · Sample size 500 · 95% level → 41.1-42.9
        • Mean 42 · Std. Dev. 9.8 · Sample size 50 · 95% level → 39.3-44.7
    • The margin of error is the level of precision that you need in your result
      • You can use that to work out the sample size you need to achieve that level of confidence
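A minimal sketch that reproduces the example intervals above, assuming the usual normal-approximation formula (mean ± z × sd/√n):

```python
import math

def confidence_interval(mean, sd, n, z=1.96):  # z = 1.96 for a 95% level
    """Normal-approximation confidence interval for a mean."""
    half_width = z * sd / math.sqrt(n)
    return round(mean - half_width, 1), round(mean + half_width, 1)

print(confidence_interval(42, 9.8, 500))  # (41.1, 42.9)
print(confidence_interval(42, 9.8, 50))   # (39.3, 44.7) -- smaller sample, wider interval
```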

Chapter 3: Questions: Write and Test the Questions

  • Understand the four steps to answer a question
    1. Understand the question
    2. Find an answer (e.g. think back over recent activity)
    3. Decide on an answer (e.g. think about what to include, discount or withhold)
    4. Respond with an answer
  • Good questions are easy to understand
    • People have to first perceive the question and turn it into something meaningful to them
    • Shorter questions are generally better
    • Use familiar words in familiar ways
    • Only ask people to focus on one thing at a time (don’t use double-barreled questions)
  • Good questions ask for answers that are easy to find:
    • Types of question:
      • Retrieving a memory · the answer is in your head
      • Retrieving information from somewhere else · e.g. what’s your IBAN number? What was your mum’s favourite subject in school?
      • An answer you have to create at the time · e.g. how close do you feel to the Guardian newspaper?
    • There’s an approximate forgetting curve:
      • Unremarkable details aren’t recalled and are quickly forgotten
      • Noticeable things are somewhat recalled and forgotten over a longer time period
      • Major life events are recalled at a higher level over a longer period of time
    • Therefore ask about a recent, vivid experience
    • Recall is harder than recognition.
    • A recall question… please name three chocolate brands…
  • Don’t ask people what they’re going to do in the future; humans are bad at that.
  • People differ in what they are willing to reveal to whom. Some questions are acceptable in some contexts but not others.
  • The context effect happens when a question changes its meaning according to its position in a questionnaire or conversation
  • Good questions make it easy to respond
  • Test your questions in cognitive interviews
    • Find a few people in your group
    • Ask the person to work through the questions and…
      • Read the question out loud
      • Explain it back in their own words
      • Think aloud to find some potential answers
      • Tell me about the choice of which answer to give
    • Do you notice any hesitation or confusion?
  • Measurement error is the difference between the true value of the answer and the answer you get
  • Spotlight: Privacy
    • Think about privacy at the start
    • A data breach is a big embarrassment and can result in a big fine
    • You should do a privacy impact assessment (PIA) for each survey; it’s a way to surface and discuss risks
    • Steps in a privacy impact assessment:
      1. Set up your document (using a template)
      2. Describe:
        • The information you’re gathering
        • Why you’re collecting it
        • What you’re using it for
        • Who you’re collecting it from
        • Where you will store it
        • Who will have access to it
        • How long you will keep it
        • How you will aggregate it
        • What you will do with your survey data at the end
      3. Describe the information flow: User → you → survey tool → partners → third parties
      4. Who’s involved?
        • Who has access to the data?
        • Who is in charge?
        • Who is responsible?
        • Whose job is it to deal with user concerns?
        • Whose job is it if something goes wrong?
        • Who is keeping an eye on things, such as who is accessing the data?
        • If a data protection regulator wants to raise a concern, who is the point of contact?
      5. Identify privacy and data protection risks …
        • What would happen if your dataset was: Misused? Compromised? Breached? Aggregated with other information?
      6. Identify and list what you’re doing to mitigate these risks and prevent them, as much as you can, from ever happening.
      7. Record everything you’ve done in steps 1 through 5
      8. Amend and update the PIA as necessary. Use it in your project postmortem
      9. Rinse and repeat
  • Spotlight: Questions to ask when choosing survey tools
    • Probe them on privacy
    • Probe them on accessibility (screen magnifier, screen reader, speech recognition, readability, other)
    • Ask if you can download the data
  • Spotlight: Choose your mode: web, paper, or something else?
    • Mode · Max questions · Cost · Pros · Cons
      • Ambient · 1-3 · $$$$ · Timely answers · Poor response rate; must be short
      • Email · 10 closed + 1 open · $ · Quick, cheap and easy; attribution; can be longer · Users suffer from email overload; spam filters
      • Paper · 10 mins · $$$$ · Easy to enclose an incentive; may be cost-effective; can be longer · Unless personalised, seen as junk; seen as high-cost/old-fashioned by stakeholders
      • Face to face · 15 mins · $$$$ · Decent response rate; carefully chosen participants; can be longer · Hard to find your group; high cost; needs skilled interviewers
      • Web invitation to web questionnaire · 1 · $ · Quick, cheap and easy; attribution · Low response rate; must be short
      • Kiosk · 1-5 · $$
  • Sadly the more costly options are better
  • Can you think of an inventive way to get a high response?

Chapter 4: Questionnaire: Build and Test the Questionnaire

  • Test your questionnaire to check you don’t have technical errors (on a variety of browsers and devices)
    • Get somebody who didn’t design it to test it
  • Good questions are easy to respond to
    • Use the simplest possible response format.
      • Use closed questions if you can (radio or check boxes)
        • Don’t use drop-downs
          • Offer up all the options as a set of radio buttons
          • Offer up a box and clean the data
          • Reduce options
          • Consider autocomplete (accessible)
    • Closed questions can anchor people on a range and change their answer (as they assume the range is representative and think of their position as relative)
      • Find the true range of answers
      • Use an open box
  • Open boxes can provide surprising answers (somebody might pay way more than you expect)
  • Open boxes can save you from category error (when the categories you’ve offered don’t apply). Category errors force people to enter incorrectly, skip the question or drop out.
    • Get the size of the box right. Bigger boxes solicit bigger responses.
  • Don’t assume anything. E.g. “What did you like about your visit?” assumes they liked something.
  • If you want people to give….
    • Exactly one answer → radio buttons
    • One or more from several choices → checkboxes
    • Not sure, or anything else → open box
  • Avoid fancy interaction devices (sliders, maps, drag and drop)
  • Choose your images carefully. Images sway answers.
  • Consider the order of your questions (especially for longer surveys)
    • Start with easy unintrusive topics
    • Then lead with interesting questions (e.g. those that relate to the burning issue)
    • Finish the questionnaire with a thank you
  • Skip an introduction page.
  • Do not have required questions; make them all optional, always.
    • A required question needs to be valuable enough to justify losing some people who are trying to answer and getting misleading answers from others
  • Do a usability test…
    • Find somebody willing to answer it
    • Watch them. Take notes about everything
    • Ask them for their views on the questionnaire at the end
    • Reflect on what to change if anything.
  • Take screenshots of your final questionnaire (it’ll help people who analyse the data later - you want to know exactly what people saw)
  • Spotlight: on a scale from 1 to 5 (Likert and ranking scales)
    • A Likert scale measures attitude
    • Likert wrote a famous paper, “A Technique for the Measurement of Attitudes” (Likert, 1932)
    • Anatomy of a Likert scale:
      • Likert scale is a set of Likert items
      • A Likert item is made up of a statement and a set of response points
      • Response points can be…
        • a five-point set from “Strongly Agree” to “Strongly Disagree”
        • a three-point set with “yes” / “?” / “no”
        • or a set of five longer statements that reflect opinions along a continuum
    • People wrongly say “Likert scale” when they mean the response format (“Strongly Agree” to “Strongly Disagree”).
  • Likert items often appear in grids (a.k.a. matrix questions)
    • Grids create a big spike in people abandoning a questionnaire
    • They can be harder to answer (understand, find, decide and respond)
  • A Likert scale is always a set of related questions that answer a single research question about an attitude. It should be possible to combine the responses into an overall score for that attitude.
    • Otherwise you’re just showing questions in a grid format
Steps to building a Likert scale
  • Start with candidate questions
  • Choose a single topic for your scale
  • Split up double-barrelled statements
  • Check that the statements are opinions
  • Test that the statements use familiar words in familiar ways
  • Include no more than 10 statements
  • Check that your statements are positive
  • Choose the number of response points
    • Choose an odd number 5, 3 or 7 points (so you have a neutral midpoint)
    • People don’t care about the number of response points.
      • See… (Phan, Blome et al., 2011)
  • Decide on your method of scoring for your Likert items
    • You can just use 1 to 5 if you have 5 response points
    • This is why it was important to ensure everything was framed in the positive
    • It doesn’t matter what you choose, just pick one and stick to it
  • Decide how to calculate the overall score (based on the item score)
    • You can just add them all up
  • Think about whether a rating response is appropriate
    • Consider turning some or all into direct questions to:
      • reduce cognitive load
      • give you more flexibility on the question
  • Test your scale with people from your group
  • Run the statistics
  • Run the test again. Reproducibility = Test/retest reliability.
    • You can run a factor analysis or compute Cronbach’s alpha (see the sketch below)
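A minimal numpy sketch of scoring a Likert scale and checking it with Cronbach’s alpha (the responses are hypothetical):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert item scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # variance of each item, summed
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the overall scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# 4 respondents x 3 positively framed statements, each scored 1-5
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3]]
print([sum(row) for row in scores])      # overall attitude score per respondent
print(round(cronbach_alpha(scores), 2))  # ~0.92: the items hang together well
```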
  • Avoid ranking questions.

Chapter 5: Fieldwork: Get People to Respond

  • Decide on your invitation, thank you and follow up
  • Choose an appropriate reward. The best incentives are guaranteed and immediate.
    • Incentives only work if the perceived reward is in line with the perceived effort
    • If people are giving their opinion for free, make sure you leave an open question
    • If people answer because they’re interested in your topic, then sometimes a promise of receiving the report is enough (or even a link to last year’s report)
  • Anonymity: you will not be identified as the individual who gave those responses
  • Confidentiality: your responses will only be seen by whoever you agree may see them
  • Some people will put their details in, even if it’s an anonymous form:
    • You might need to redact personal details
  • Reminders can increase response rate. Make the interval sensible. Try sending in another modality.
  • In the invitation to complete the survey:
    • Build trust: say who you are; say why you’ve contacted this person specifically; include your contact info to show a real person is behind it
    • Increase perceived reward: explain the purpose; explain why this person’s responses will help; if there is an incentive, offer it
    • Help people estimate the effort: outline the topics of the survey; say what the closing date is; say how many questions there are; do not say how long it will take (unless you’ve tested it)
  • Example invitation from Trello…
  • Hey Caroline,
    Thanks for using Trello. We'd love to hear how you feel about it!
    We have a few questions for you in this quick online survey:
    <<<<Button>>>>
    Your honest feedback means the world to us.
    We read every response so we can make Trello better for you and others.
    
    Thank you!
    
    The team at Trello
  • Write a thank you page for politeness
  • Do a pilot test
    • Use about 10% of your sample
    • Start your data cleaning
    • Create draft deliverables
    • You’ll catch mistakes
  • Launch your fieldwork and look after it
    • Do a complete analysis after the first 100 responses.
    • If you can make the decision at that point, then stop
    • Follow up quickly

Chapter 6: Responses: Turn Data into Answers

  • Clean your data
    • Consider what you need to redact or delete
      • Repeated or implausible responses
      • Do you need to redact any personal data?
      • Curse words or racist language?
      • Remove any test data
  • Set up a log page to keep notes on what you do during your data cleaning and analysis
  • Decide whose responses you will use
    • You need to decide what to do with missing data. Options:
      • Remove the person from the data set · Simple but loses data
      • Remove only problematic answer · Preserves data, can be confusing in reports
      • Impute the missing values (e.g. mode, median, random) · complex and hard to explain
      • Design a better survey next time
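A minimal pandas sketch of the first three options above (the column names and fill choices are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"respondent": [1, 2, 3, 4],
                   "q1_rating": [4, np.nan, 5, 3],
                   "q2_rating": [2, 4, np.nan, 5]})

complete_only = df.dropna()                        # remove the person from the data set
per_question = df["q1_rating"].dropna()            # remove only the problematic answer
imputed = df.fillna(df.median(numeric_only=True))  # impute (here: column medians)
```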
  • You can weight a survey (assign a multiplication factor to each response in proportion to its representativeness)
    • Adjustment error happens when you make less-than-perfect weight adjustments
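A minimal sketch of weighting, assuming a made-up group that is 50% mobile users while only 20% of respondents answered on mobile:

```python
population_share = {"mobile": 0.5, "desktop": 0.5}  # known make-up of the group
sample_share = {"mobile": 0.2, "desktop": 0.8}      # make-up of the responses

# Multiplication factor per response, in proportion to representativeness
weights = {g: population_share[g] / sample_share[g] for g in population_share}

responses = [("mobile", 3), ("desktop", 5), ("desktop", 4), ("mobile", 2), ("desktop", 5)]
weighted_mean = (sum(weights[g] * score for g, score in responses)
                 / sum(weights[g] for g, _ in responses))
print(weighted_mean)  # ~3.09, versus an unweighted mean of 3.8
```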
  • Get to know your numeric data
    • Check the min and max for responses that aren’t plausible
    • Mode is good for decision making (e.g. a good experience for the most frequent customer type)
    • Mean is sensitive to outliers (Jeff Bezos walking into a bar raises the average income a lot)
    • Variance is a measure of how spread out the values are compared to the mean.
      • The standard deviation is the square root of the variance → and it gets you back to original units!
    • Visualise your distributions and correlations
    • Look into outliers
    • See what the data looks like in different chart formats and pivot tables
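A minimal sketch of these checks with Python’s statistics module (the answers are hypothetical, with one implausible outlier):

```python
import statistics

answers = [3, 4, 4, 5, 2, 4, 1, 5, 4, 97]  # 97 is not a plausible rating

print(min(answers), max(answers))   # 1, 97 -> investigate the 97 before analysing
print(statistics.mode(answers))     # 4: the most frequent answer
print(statistics.mean(answers))     # 12.9: dragged up by the outlier (Bezos effect)
print(statistics.median(answers))   # 4.0: robust to the outlier
print(statistics.stdev(answers))    # square root of variance, in the original units
```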
  • If you have too many ‘other’ answers you might have a problem with the question
  • ‘Coding’ is a form of data cleansing (e.g. correcting spelling mistakes and creating new data categories)
  • Types of coding:
    • Light touch: Send everything
    • Adjectival: Choose an adjective that sums up the response
    • Descriptive: Summarise to a short phrase
    • Task area: Allocate to certain department
    • In vivo: Choose small chunks that are representative of the response
    • Provisional: Set up a list of codes beforehand
  • Aim for inter-rater reliability (when every member of a coding team codes each response in the same way)
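One common way to check inter-rater reliability is Cohen’s kappa, which corrects raw agreement for chance; a minimal sketch for two coders (the codes are hypothetical):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["praise", "bug", "bug", "feature", "praise", "bug"]
b = ["praise", "bug", "feature", "feature", "praise", "bug"]
print(round(cohens_kappa(a, b), 2))  # 0.75: substantial agreement
```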
  • Attributes of a good chart:
    • Honest
    • Easy to read
    • Correctly labelled
    • Has a clear message
    • Works for the people who read it
  • Avoid 3D charts. Avoid fancy charts.
  • Column charts are an alternative to bar charts and easier to label.
  • Remove visual clutter from charts.
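A minimal matplotlib sketch of those attributes: a plain 2D column chart, clutter stripped, with the main message as the title (the data and labels are made up):

```python
import matplotlib.pyplot as plt

reasons = ["Price", "Support", "Features", "Other"]
counts = [48, 31, 17, 4]

fig, ax = plt.subplots()
ax.bar(reasons, counts)                  # plain 2D columns, no 3D effects
ax.set_title("Price is the top reason customers leave (n=100)")  # clear message
ax.set_ylabel("Responses")               # correctly labelled axis
for spine in ("top", "right"):           # remove visual clutter
    ax.spines[spine].set_visible(False)
fig.savefig("reasons.png", bbox_inches="tight")
```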

Chapter 7: Reports: Show the Results to Decision Makers

  • Think about what you learned, numerically
    • A descriptive statistic is a statement about the data in the survey responses
    • An inferential statistic is a statement about the group based on the data from the survey responses.
    • Consider the strength of views as well as the direction (a bimodal distribution can average out)
    • Consider splitting the report if there are two different views
    • Avoid cherry picking (picking just a few comments instead of the overall response)
  • Decide what news to deliver and when
    • Don’t surprise people with bad news
    • Get your analysis done in time for the decision making moment
    • It’s OK to have gaps (e.g. we need to do more work / more research)
  • Decide what format to use for delivery
    • This week’s results
    • A poster
    • An infographic
  • For slides use the assertion/evidence format (Garner, Alley et al., 2009). The title of the slide is the main point, made in a full sentence, and is supported by further detail or evidence.
  • Choose inverted pyramid for most presentations
    • Inverted pyramid is the journalistic style that starts with the most important message and then supports it with extra detail.
    • Methodology first is the traditional style of scientific papers
    • Presentation Zen (Garr Reynolds, 2012): a series of compelling images accompanied by a few words to provide an engaging backdrop to a powerful speech.
  • The best insights come from using surveys alongside other methods
  • Total survey error diagram (Groves, Fowler et al., 2009)

Chapter 8: The Least You Can Do

  • The 7 steps to the survey process. They are all interlinked.
    1. Goals · Establish your goals · Questions you need answers to
    2. Sample · Decide who to ask and how many · People you will invite to answer
    3. Questions · Test the questions · Questions people can answer
    4. Questionnaire · Build the questionnaire · Questions people can interact with
    5. Fieldwork · Run the survey from invitation to follow-up · People who respond
    6. Responses · Clean and analyse the data · Answers
    7. Reports · Present the results · Decisions
  • The checklist for everything (Question · Definition of Done):
    1. Goals
      • Who do you want to ask? · A precisely defined group of people
      • What do you want to ask them? · A single most crucial question
      • What decision will you make? · A method to score or count your most crucial question; a draft of the presentation/output for results sharing
    2. Sample
      • How did you find your sample? · If narrowing down: you have a list and have investigated its quality. If choosing in the moment: think about how to intercept, and test the idea in your pilot. If snowballing up: decide on a method for starting the snowball
      • How many people do you need to respond? · Agreed a number of responses to aim for
      • What response rate do you expect to get? · Expected response rate from prior surveys or a pilot
      • Have you decided on a representativeness question? · Questions that allow comparisons with other data
      • Do you know the burning issues? · Completed interviews with your group, identified issues, triangulated with results from other surveys
    3. Questions
      • Do your questions use words in familiar ways? · Completed cognitive interviews with your group
      • Do people have answers for your questions? · Iterated based on cognitive interview feedback
      • Do they feel comfortable revealing their answers to you? · Checked issues were resolved in a second set of cognitive interviews
    4. Questionnaire
      • Have you got your privacy policy sorted out? · PIA is written and checked against existing privacy policies; revised privacy notices are ready to go
      • Have you chosen your mode and questionnaire tool? · Created a draft questionnaire
      • Does your questionnaire work correctly from invitation to thank-you page? · One person has tested it on a variety of browsers and assistive technologies
      • Can people from your defined group use your questionnaire? · Usability testing completed with 3 people; amended issues and tested again
    5. Fieldwork
      • Are you offering follow-up? · Decided approach and cross-checked with the PIA
      • Have you run your pilot test? · Run from start to finish; analysed responses; draft presentation is created
    6. Responses
      • Have you backed up data and created a research log? · This is a continuous process
      • Have you cleaned the data (redacting personal info, checking ranges)? · Passed on follow-on questions and redacted personal data
      • Did you have to exclude any responses, and why? · All exclusions are documented in the research log
      • Did you choose to do any weighting, and why? · Weights are applied and detailed in the research log
      • Did you check that your responses were representative? · Checked and acceptably in line with expected results
      • Have you paid attention to all open answers? · Read and thought about every answer; sent answers to whoever can take action; smoothed and grouped answers for numerical analysis; created categories of answers
      • Did you find anything that surprised you or was unexpected? · You should find something surprising in your responses
    7. Reports
      • What did you find out compared to your goals? · Reflected on what you learned vs what you hoped to learn
      • Did you use any descriptive statistics? · Used descriptive statistics to help people read, compare and use the numbers in the report
      • How did you communicate results? · Used a sensible method, or multiple methods, appropriate to your stakeholders
      • Did you triangulate? · Compared what you learnt from this survey with other surveys and conversations