Testing with Humans

Author

Giff Constable (with Frank Rimalovski)

Year
2018

Review

This book serves as a healthy reminder that most Product Teams spend too much time building and shipping features. Exploratory testing gives us a chance to validate our assumptions before we commit to building. This book only scratches the surface of exploratory testing, but it’s a good introduction, and I recommend it to people getting into product management. I can recommend ‘Testing Business Ideas’ for a more in-depth look at some of the techniques mentioned in this book.


Key Takeaways

The 20% that gave me 80% of the value.

  • There are two types of experimentation in Product Management
    1. Optimisation → making small changes to a live product. The results are measured and the better performing variant is preferred.
    2. Exploratory → validating key assumptions before building a solution.
  • You can validate assumptions faster through experimentation than you can by building software. Experiments help you move fast, increase the odds of success, and make the most of limited resources. Talking to potential customers is also a good approach; see Talking to Humans for more about that.
  • The high-level experiment process:
    1. Identify your key risks and assumptions - in your business model, product or feature
      • There’s a number of frameworks you can use to tease out assumptions:
        • Business Model Canvas by Alex Osterwalder
        • Assumptions exercise from Talking to Humans
        • Lean canvas by Ash Maurya
        • Assumption Mapping by David Bland
    2. Prioritise the riskiest assumptions - (the hypothesis to test)
      • Use a 2x2 of Impact and uncertainty
    3. Identify and sequence experiments
      • You can ideate on how to test an assumption, just like you can ideate around features. Involve the team, generate as many ideas as you can - then pick the best
    4. Incorporate the results into your decision making process
      • Don’t forget this one. It’s amazing how many teams experiment and measure - but the results don’t inform the future direction.
      • Visualise your learnings from each experiment against each assumption - so you can see everything on a page.
  • There’s a correlation between how much effort an experiment takes and how believable its results are, from a paper prototype all the way up to a live product or business. You can plot experiments on a ‘truth curve’.
  • The Five Traits of Good Experiments
    1. They are structured and planned. Use a template.
    2. They are focused. Test a core hypothesis, don’t try to do too much at once.
    3. They are believable. You can trust what you’re learning.
    4. They are flexible. Remain open to making small improvements as you go.
    5. They are compact. You can run them in an efficient amount of time.
  • The Experiment Template
    • What hypotheses do we want to prove / disprove?
    • For each hypothesis, what quantifiable result indicates success? (Pass/Fail metrics)
    • Who are the target participants of this experiment?
    • How many participants do we need?
    • How are we going to get them?
    • How do we run the experiment?
    • How long does the experiment run for?
    • Are there other qualitative things to learn during this experiment?
  • Always be asking: How can we learn just as much with half the time and effort?
  • You can build a culture of experimentation either Top-Down or Bottom-Up
    • Target execs who realise that the success rate of initiatives is too low
    • OR start a grassroots movement by experimenting where you can, and publicising the results

Deep Summary

Longer form notes, typically condensed, reworded and de-duplicated.

There are two types of experimentation in Product Management (this book focuses on the latter).

  1. Optimisation is about making small changes to a live product. The results are measured and the better performing variant is preferred.
  2. Exploratory is about validating key assumptions before building a solution.

Why Experiment?

Experimentation is valuable because it’s often a faster way to gain confidence in your idea than building and shipping software. Experiments help you move fast, increase the odds of success and make the most of limited resources. Talking to potential customers is also a good approach; see Talking to Humans for more about that.

You can de-risk a new feature, product or service. If you’ve made some big assumptions, test them before building your product. If you have the right mindset and are open to learning, insights from your experiments will help you refine your idea and improve your chance of success.

We run experiments in order to make better decisions. They gather crucial information that helps us create better strategies and take smarter actions in less time and with less cost. Experiments save time and cost. “Measure twice, cut once”. They are rarely a waste of time, there’s always something to learn about your potential customers.

Experiment Process

1) Identify the key risks and assumptions in your business model/product/feature
  • Behind every product or business model there’s a stack of assumptions. Some can be classed as facts, but most are educated guesses.
  • To shake out assumptions you can use a number of frameworks (most will feel like overkill if you’re only looking at a single feature):
    • Business Model Canvas by Alex Osterwalder
    • Assumptions exercise from Talking to Humans
    • Lean canvas by Ash Maurya
    • Assumption Mapping by David Bland
  • Think about the key questions: Who is this for? What problem or need are we solving for them? How will we solve it? How will we acquire and retain our customers? How will we create value for our company? How could it go wrong?
You can do a lightweight financial model to expose unknowns and pressure points (a toy sketch follows this list):
  • Customer acquisition: cost and number by channel.
  • Customer activation: basket size.
  • Customer retention.
  • Marginal costs.
  • Fixed costs.
  • Investments required.
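
As a minimal sketch of such a model (every number and variable name below is an invented assumption, not a figure from the book), a few lines of arithmetic are often enough to show which guesses the economics hinge on:

```python
# Toy unit-economics sketch: every input is a guessed assumption to be tested.
cac = 40.0                # customer acquisition cost per sign-up (blended across channels)
activation_rate = 0.30    # share of sign-ups that become paying customers
avg_basket = 25.0         # average basket size per order
orders_per_year = 6       # retention expressed as repeat orders per year
margin = 0.55             # contribution margin on each order (after marginal costs)
fixed_costs = 120_000.0   # annual fixed costs
customers = 2_000         # paying customers assumed in year one

annual_value_per_customer = avg_basket * orders_per_year * margin
effective_cac = cac / activation_rate  # cost to acquire one *paying* customer

profit = customers * (annual_value_per_customer - effective_cac) - fixed_costs
print(f"Value per customer per year: {annual_value_per_customer:.2f}")
print(f"Effective CAC:               {effective_cac:.2f}")
print(f"Year-one result:             {profit:.2f}")
# If the model only works when every guess is optimistic, those guesses are the
# risky assumptions to test first.
```
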
2) Prioritise the riskiest assumptions - the hypotheses to test
  • Prioritise assumptions based on uncertainty and impact
  • A 2x2 of impact vs uncertainty makes the priorities clear (a small scoring sketch follows this step):

    |             | Low Uncertainty | High Uncertainty |
    | ----------- | --------------- | ---------------- |
    | High Impact |                 | Test these       |
    | Low Impact  |                 |                  |
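
A minimal sketch of that prioritisation, assuming each assumption is scored 1-5 on impact and uncertainty (the assumptions and scores below are invented for illustration):

```python
# Score assumptions on impact and uncertainty (1 = low, 5 = high), then test
# the ones in the high-impact / high-uncertainty quadrant first.
assumptions = [
    {"name": "SMBs will pay a monthly subscription", "impact": 5, "uncertainty": 4},
    {"name": "Users will import their own data",     "impact": 3, "uncertainty": 5},
    {"name": "Email is an effective channel",        "impact": 4, "uncertainty": 2},
]

def riskiness(a):
    return a["impact"] * a["uncertainty"]

for a in sorted(assumptions, key=riskiness, reverse=True):
    quadrant = "TEST THIS" if a["impact"] >= 4 and a["uncertainty"] >= 4 else "later"
    print(f"{riskiness(a):>2}  {quadrant:<10} {a['name']}")
```
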
3) Identify and sequence experiments
Generate experiment ideas for key assumptions
  • Select a risky assumption. Convene your team. Make sure everyone understands it.
  • Start a rapid sketching session with the team (a.k.a crazy 8’s/design studio/charrette)
    • Provide prompts (vary duration, list existing tools that could be adapted etc)
    • Give everyone a few minutes to sketch experiment ideas
      • There are almost always creative ways to test those risky assumptions faster and sooner than most people think.
    • Get them to present back to the group (clarifying questions only - no judgement)
    • Get them to vote on favourites
    • Split into pairs to refine and define top-ranked experiments using the template
    • Share back to the group. Now allow critique and refine together

“Don’t build too much — that is always failure mode. Teams get so invested in what they are making for the experiment that they lose sight that their creation is just to learn.”

  • Think about time to insight (or effort). You can’t test everything - at most companies you’ll feel pressure to not test anything at all
  • Knowing your options is a good thing, but you can’t test a menu. You need to test something specific.
  • Sequence your experiments and assumptions. Start with risky, high impact assumptions. Show the experiments that you plan to do.
  • As you think about prioritisation, you’ll usually want to start with quicker and easier ideas. Sadly, there’s a correlation between experiment effort and believability...
  • ...Illustrated in the truth curve.
    [Image: the truth curve, plotting experiment effort against believability of results]
4) Incorporate the results into your decision making process
After experimentation one of these 4 should be true...
  1. You feel you still need more data
  2. You are ready to move forward with confidence
  3. You kill the initiative entirely
  4. You change your hypothesis based on the data (new experiments)
  • Make decisions in a disciplined way, as part of a defined decision-making process
  • Create a dashboard to visualise your learnings:
    | Assumption   | Status    | Notes                 |
    | ------------ | --------- | --------------------- |
    | Assumption 1 | + results | Some notes here       |
    | Assumption 2 | - results | Some notes here again |
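
A minimal sketch of keeping that consolidated, everything-on-a-page view in code rather than a spreadsheet (the assumptions, experiments and results below are invented):

```python
# Consolidated learning log: each experiment's result is recorded against the
# assumption it tested, so the whole picture can be reviewed at a glance.
results = [
    {"assumption": "Assumption 1", "experiment": "Landing page",   "status": "+", "notes": "9% conversion vs a 5% bar"},
    {"assumption": "Assumption 1", "experiment": "Customer calls", "status": "+", "notes": "8 of 10 described the problem unprompted"},
    {"assumption": "Assumption 2", "experiment": "Fake door",      "status": "-", "notes": "Only 1% clicked the new button"},
]

print(f"{'Assumption':<14}{'Experiment':<16}{'Status':<8}Notes")
for r in results:
    print(f"{r['assumption']:<14}{r['experiment']:<16}{r['status']:<8}{r['notes']}")
```
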
Consider a weekly decision meeting
  • It’ll hold you accountable to fast and sharp execution.
  • Review what you hoped to learn
  • What you actually learned
  • Make conscious decisions on the next experiments and the idea as a whole.
  • Discuss what you should do next week.
Results from experiments are rarely clear cut. It requires a judgement call to interpret them
  • Don’t get hung up on statistical significance - unless you’re operating at scale, it’s too much of a burden. You’re not testing immutable laws of nature, you’re trying to see into the future
  • Every experiment has a weakness
  • You’ll worry about false positives and false negatives. The members of your team will interpret the results differently
  • Gather good data on important things
    • Choose the big risks
    • Be disciplined about designing and executing your experiments
    • You will get much more believable and actionable data
    • Part of running a disciplined experiment is keeping the incoming data organised
    • Keep a running, consolidated source of key results
  • Know when to be skeptical of results
    • Higher effort/fidelity experiments produce more believable information
    • The lower the fidelity, the more judgement is required
  • Try to avoid your biases
    • Being irrationally optimistic is a gift and a curse
    • Anchoring: don’t fixate on the first bit of information you receive
    • Confirmation bias: don’t only look for data that reinforces your point of view
    • Bandwagon effect: jumping to a conclusion because it’s popular
    • Sunk cost fallacy: our tendency to resist changing or shutting down an initiative we’ve already invested in
  • Pay attention to outliers - they might teach you something truly interesting
“If we have data, let’s look at data. If all we have are opinions, let’s go with mine.” Jim Barksdale

The Five Traits of Good Experiments

  1. They are structured and planned. Use a template.
  2. They are focused. Test a core hypothesis, don’t try to do too much at once.
  3. They are believable. You can trust what you’re learning.
  4. They are flexible. Remain open to making small improvements as you go.
  5. They are compact. You can run them in an efficient amount of time.

The Anatomy of an Experiment

A well-run experiment requires discipline. Casual chaotic experiments lead to chaotic data which is hard to use in decision making. Use a template or checklist to maintain standards.

Experiment Template

1) What hypotheses do we want to prove / disprove?
  • Test a single hypothesis at a time. Test up to three if your experiment is high effort/fidelity
2) For each hypothesis, what quantifiable result indicates success? (Pass/Fail metrics)
  • Set ahead of time. Ground in research if you can
  • Don’t worry about statistical significance - unless already operating at scale
  • When in doubt, set a high bar.
3) Who are the target participants of this experiment?
  • You need to target the right people if you’re going to believe the results
  • Restrict edge-cases or certain customer segments to speed things up
4) How many participants do we need?
  • For enterprise 10-15, for consumer 100s or 1000s
  • For usability testing <10 is usually OK (or test until bored)
  • Tradeoff between believability and effort
  • Break into 2-3 groups, so you can adapt the experiment as you go
5) How are we going to get them?
  • Learn how to acquire participants through guessing and trial and error. There are many channels: emails, cold calls, networks, conferences, meet-ups, intercepts, lunch breaks, online ads, email lists, or embedding into an existing product
6) How do we run the experiment?
  • Document the structure and execution steps, ensure roles and responsibilities are clear
7) How long does the experiment run for?
8) Are there other qualitative things to learn during this experiment?
  • Primary goal is to test a hypothesis, but make the most of the time you have with customers. What ride-along questions could provide valuable insights?
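
As a minimal sketch, the template can also be kept as a structured record so every experiment answers the same questions (the fields mirror the template above; the example values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str                # the single hypothesis to prove / disprove
    pass_fail_metric: str          # quantifiable result that counts as success, set up front
    target_participants: str       # who we need to learn from
    participants_needed: int       # how many of them
    recruitment_plan: str          # how we are going to get them
    method: str                    # how the experiment is run
    duration_days: int             # how long it runs for
    ride_along_questions: list[str] = field(default_factory=list)  # extra qualitative learning

# Hypothetical example, not taken from the book:
exp = Experiment(
    hypothesis="Ops managers will request a demo from a landing page",
    pass_fail_metric=">= 5% of visitors submit the demo form",
    target_participants="Ops managers at 20-200 person logistics firms",
    participants_needed=300,
    recruitment_plan="LinkedIn ads plus two industry newsletters",
    method="Drive traffic to a landing page describing the value proposition",
    duration_days=14,
    ride_along_questions=["Which headline variant draws the most clicks?"],
)
print(exp.hypothesis, "->", exp.pass_fail_metric)
```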

Always be asking: How can we learn just as much with half the time and effort?

In a rush? Use a single-sentence template: For [customer segment], we believe that [outcome] will happen when we run [experiment description].

Experiment Tips

  1. Test the most important risks and assumptions first.
  2. Balance testing the product with testing the business model.
  3. Get creative. There’s always another way to test.
  4. Sloppy experiments lead to sloppy results. Plan.
  5. An experiment is not throwing things against the wall. The best experiments have structure rather than chaos.

  6. Set success criteria ahead of time.
  7. Always be looking to learn more with half the effort / time.
  8. For larger experiments do a trial run.
  9. Combine with speaking to customers at the same time (customer research)
  10. Combine evidence and judgement to make smart decisions
  11. Choose experiment types that give believable results in the shortest time.

Experiment Archetypes

Testing Demand

Landing Page: create a simple web page that expresses your value proposition and allows people to express their interest with some sort of call to action.
  • CTA could be submitting an email, filling out a form, or even entering a credit card number
  • Allows testing different price points or product bundles
Advertising: advertising your value proposition to a relevant audience to see whether people respond.
  • Boil your value proposition down to something that can be presented to a targeted audience to see if they convert
  • A/B test variations on your value proposition, and explore different channels
  • Measure both ad conversion and landing page conversion
  • Use Google, Facebook, Instagram, YouTube or Craigslist
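
A minimal sketch of reading the results of such a test, assuming you have counted impressions, ad clicks and sign-ups per value-proposition variant (all numbers invented):

```python
# Compare value-proposition variants on both ad conversion and landing page conversion.
variants = {
    "save-time":  {"impressions": 5000, "clicks": 150, "signups": 18},
    "save-money": {"impressions": 5200, "clicks": 110, "signups": 21},
}

for name, v in variants.items():
    ad_ctr = v["clicks"] / v["impressions"]    # ad conversion (click-through rate)
    lp_cvr = v["signups"] / v["clicks"]        # landing page conversion
    print(f"{name:<10}  ad CTR {ad_ctr:.1%}   landing page conversion {lp_cvr:.1%}")
```
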
Promotional Material: produce online or offline promotional material, to test reactions or generate demand.
  • Get feedback on your value proposition, or generate demand
  • E.g. Dropbox created a short demo video that explained their proposed solution and asked people to sign up for a waiting list, driving the waiting list from 5k to 75k
Pre-Selling or Crowd Funding.
  • An idea probably as old as mankind, pre-selling is simply where you try to book orders before you actually have the product

Testing Products / Features

Paper Testing: mock up an example of an application UI or report and put it in front of potential customers
  • Primarily for software products, such as mobile and web applications, and information products such as data, analysis, and media
Button to nowhere: dangle a feature (such as a button or menu item) in front of users before you have actually built it, and measure whether they try to use it (see the sketch after this list)
Product Prototype: a working version of your product or experience built for learning and fast iteration, rather than for robustness or scale.
  • A physical product, might be a hand-made or 3D-printed version
  • In software, this might be a working, even partial, version of a feature hacked together with code, a prototyping tool, a form builder, etc.
Wizard of Oz: the customer thinks they are interfacing with a real product (or feature), but where you are providing the service in a manual way, hidden behind the scenes
Concierge: like Wizard of Oz, but you don’t conceal that you’re delivering the service manually
  • Manually, and overtly, act as the product you eventually want to build
  • E.g. Replacing Machine Learning with a person
Pilots: early version into the hand of customers (at small scale, for a finite period)
Usability Testing: checking whether someone can effectively use a product without issues.
  • Task Completion is as simple as seeing if someone can complete a task without help
  • Noting where they struggle and have difficulty
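
As a minimal sketch of the ‘button to nowhere’ archetype above, assuming a small Flask web app (the route names and log file are invented for illustration):

```python
# Fake door: expose a link to a feature that doesn't exist yet, log every click,
# and show a friendly "coming soon" page instead of the real thing.
from datetime import datetime, timezone
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/export-to-pdf")
def fake_door():
    # Record the click so demand for the unbuilt feature can be counted later.
    with open("fake_door_clicks.log", "a") as f:
        f.write(f"{datetime.now(timezone.utc).isoformat()} export-to-pdf\n")
    return redirect("/coming-soon")

@app.route("/coming-soon")
def coming_soon():
    return "This feature isn't ready yet - thanks for your interest!"

if __name__ == "__main__":
    app.run(debug=True)
```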

How to build a culture of experimentation

Top-Down Culture Change
  • Executives realise that the success rate of initiatives is too low
  • They encourage experimentation down the organisation
  • Execution teams normally receive this well.
  • Middle management often resists, because it takes away their decision-making power
  • With training, patience, building from small wins to big ones, and consistent pressure from the CEO, you can break through corporate resistance to change
Bottom-Up Culture Change
  • Grass roots movement
  • Execution teams sneak experiments in where possible
  • Gradually start to co-opt management
  • As you chalk up little wins that lead to bigger wins, tell the story of your process; other teams will want to emulate you
  • The culture of experimentation will start to spread.

Things to do to help your case

  • Do experiments only when the need is great. Keep them small and tight.
  • Communicate early and smartly to other groups
  • Start with smaller tactical projects, aggregate the success into a larger narrative.
  • Pull management into key meetings, make your learning process their learning process
  • Control your narrative. Share what you learnt, and how. Draw connections back to the big goals.

Memorable Quotes

How can I learn about this starting today?
Requirements are actually hypotheses... realising this should be liberating. - David Bland
How can we learn just as much with half the time and effort?
Don’t build too much — that is always failure mode. Teams get so invested in what they are making for the experiment that they lose sight that their creation is just to learn
The experimentation game is validating key assumptions before building a solution