The temptation to form premature theories upon insufficient data is the bane of our profession. – Sherlock Holmes, fictional detective
Think back to a time you looked at your monthly sales or engagement numbers and wondered how they could be improved. You probably looked at some metrics or some customer feedback - then, in a regularly scheduled meeting, you discussed with your team how you might improve them.
You present your thinking, hoping everyone will quickly agree the next set of company priorities. But it doesn't quite happen that way.
Your interaction designer thinks the onboarding is too weak and that you should focus on fixing usability issues.
Your marketing director is concerned that the brand value proposition isn’t consistent and that you should focus on your brand and messaging.
Your sales lead complains that the product lacks the features he needs to sell it properly.
The support team claim the product is unstable and that outages are too frequent. They recommend focusing on stability first and foremost.
This misalignment is not unique, or even uncommon. Everyone in your organisation instinctively and unavoidably interprets your customers' needs based on the information they face on a daily basis, combined with their area of expertise and their personal confirmation bias.
Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms one’s pre-existing beliefs or hypotheses while giving disproportionately less consideration to alternative possibilities.
Wisely - as your passion for creating great customer experiences demands - you decide to find out from your users directly. You speak to plenty of them, too, so you can be completely confident that you have identified the right answers to your questions. And yet, naggingly, you know that they're only almost the right answers.
One small thing keeps them from being completely right: they still need to be interpreted - without bias.
That leaves you, and you alone, as the person responsible for gathering and interpreting the user feedback. Now, more than ever, you must understand and minimise your own personal bias to ensure the right path is followed.
No pressure.
Like it or not, when trying to interpret customer data, every one of us is susceptible to bias, and we need to learn to factor that into how we interpret our findings. We need to stand above our bias to see the bigger picture - to assume a 'helicopter view', if you like: 50,000 feet up in the air, looking down.
So how do you get to the helicopter view?
Here are a few tips:
Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom - Clifford Stoll
Know how much you want to invest in generating new customers for your current product, supporting existing ones and extending your product to cater for new audiences. This will help focus your research from the beginning.
Once we know something, we find it hard to imagine what it was like not to know it - Chip & Dan Heath
Listening, not talking, is the key to understanding your users' needs. Capture all their points, not just the ones you agree with or have encountered before in other interviews.
Empathise with your users, but be careful to avoid leading questions (we’re working on that post - make sure to follow us to find out when it’s live).
The ones you always hear the most say nothing that's worthwhile - Martin Walkyier, singer and poet
The Dunning-Kruger effect is a well-documented finding in psychology: people with a great deal of knowledge in a field tend to underestimate their ability in that field, whereas those with a shallower understanding of the same field tend to falsely believe they are better than they actually are.
When engaging with users, humility and self-awareness of your potentially damaging role in the UX research process will help curb your tendency to search for confirmation of your own beliefs.
If you torture the data long enough, it will confess - Ronald Coase, economist
One statistic proves nothing and makes it all too easy to draw false conclusions. Multiple statistics, by contrast, start to paint a picture - an overview of a problem.
Qualitative user research helps you home in quickly on the right questions, but can lead you astray if used on its own to define your roadmap. Quantitative user research gives you enough data to prioritise and order your roadmap well, but can cost you the human touch needed for great design.
Look for corroborating data and a mixture of research types to validate your conclusions.
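To make that concrete, here is a minimal sketch in Python - with entirely hypothetical theme names, numbers and thresholds - of what triangulation can look like: a theme raised in interviews only makes the shortlist if an independent quantitative signal points the same way.

```python
# Hypothetical example: cross-check interview themes against usage metrics.
# All theme names and numbers below are invented for illustration.

# Share of interviewees who raised each theme (qualitative signal).
interview_themes = {
    "onboarding_confusing": 0.60,
    "missing_export": 0.45,
    "app_crashes": 0.20,
}

# Independent quantitative signal for each theme (e.g. drop-off rate,
# share of support tickets, crashes per session).
quant_signals = {
    "onboarding_confusing": 0.35,
    "missing_export": 0.05,
    "app_crashes": 0.30,
}

# A theme is only "validated" when both sources agree it matters.
QUAL_THRESHOLD = 0.30   # raised by at least 30% of interviewees
QUANT_THRESHOLD = 0.10  # quantitative signal above 10%

validated = [
    theme
    for theme, share in interview_themes.items()
    if share >= QUAL_THRESHOLD and quant_signals.get(theme, 0.0) >= QUANT_THRESHOLD
]

print(validated)  # -> ['onboarding_confusing']
```

The exact thresholds matter far less than the principle: no single source of data gets to define the roadmap by itself.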
Experts often possess more data than judgement - Colin Powell
Once you have results, collate them and rank them in order of your users' priorities. It's time to get your team back in. But this time, don't talk about improving your numbers (abstract) - talk about solving user stories. Get multiple people's input on how they would solve the stories.
Invite the team to interpret the findings in their own way and look for the patterns between the data, the stories and your team. People will quickly align around those problems in a way they were never able to before.
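If it helps to see the idea in miniature, here is a rough Python sketch - with invented stories and figures - of one simple way to rank findings: how often a problem was raised, weighted by how painful users said it was.

```python
# Hypothetical example: rank user problems by how often they were raised,
# weighted by how severe users said they were. All figures are invented.

findings = [
    # (user story, times mentioned, average severity on a 1-5 scale)
    ("As a new user, I want a guided setup", 18, 4.2),
    ("As an admin, I want to export reports", 12, 3.1),
    ("As any user, I want the app to stop crashing", 7, 4.8),
]

# Simple priority score: mentions x severity.
ranked = sorted(findings, key=lambda f: f[1] * f[2], reverse=True)

for story, mentions, severity in ranked:
    print(f"{mentions * severity:6.1f}  {story}")
```

A crude score like this is only a conversation starter, but it keeps the discussion anchored to what users actually reported rather than to whoever argues loudest.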