Hey, Qual, meet Quant. Quant, this is Qual.

If I had one piece of advice to give user researchers and designers in general, it would be this: be a bit less dependent on qualitative findings alone. It affects the quality of your work (see what I did there? No? OK, let's move on) and it affects our general credibility as a discipline.

When I worked as a freelance consultant, most of my recommendations were based on qualitative findings. For many of my clients this was fine, as the improvements I recommended were really rather glaring. The 700% increase in one of my clients' online sales after half a day's work is still a point of pride for me (which is why I managed to shoehorn it into this post somehow).

Skyscanner was one of my clients back then. Now that I work for Skyscanner full-time I’m aware of just how ill-advised some of my initial recommendations were. Don’t get me wrong. Some of them were ace. But if I had pushed harder to get access to their analytics I could have been more useful. I realize this now because I have access to more data than I can comprehend.

Most people begin their journey as a user researcher by running usability testing sessions. This is an epiphany for many people. It can be very powerful. But it’s very easy to mistake a user session for real life. It’s not. You’re just trying to get a bit closer to the real life use of your product. Have you ever had someone get up from a user session and make a sandwich? Thought not. That happens sometimes when people are using your product in real life.

There is a tendency for observed findings from usability tests to be presented as irrefutable facts which need immediate attention. The confusion that Moira experienced with your filters should become the company's number one priority right away. But when you get round to resolving Moira's issue, you can't find a single metric that has improved as a result.

“They obviously didn’t do user testing”

On Twitter and LinkedIn I see fellow UX people make very broad statements about some of the design decisions they see in everyday life. The latest example I saw was the decision from ASOS to force its app users to sign in or register before adding a product to their basket. I have to agree that this isn’t the greatest user experience I’ve heard of.

But I would be reluctant to argue that it was a bad business decision unless I had access to a lot more data than I do about their app usage. One person argued, "They obviously didn't do user testing". Now I'm not a betting man, but if I were forced to bet my house on one or the other, I'd bet that ASOS have done some user testing of their app. I doubt I would get good odds, mind you.

The thing is, user testing is not the way to make a decision like that. You don’t need to run a user test to know that you’ll get fewer people adding things to their basket with this move. You might well get a lot of uninstalls. ASOS is a pretty big brand and big brands can get away with decisions like this at times. My local bike shop can’t, that’s for sure. It’s all about how big the carrot is, but let’s not get into my carrot analogy.

You will be rewarded with riches

There is an argument I often hear in which any UX improvement will be rewarded with riches beyond the organisation's wildest imagination. In some cases people make the case that a rather minor issue will cause users to leave in droves. This argument is often ludicrous and makes you look a bit foolish in front of other people within the business. If it were that simple, your job wouldn't be necessary.

It’s very easy to make a case based on some qualitative findings you have and then simply fill in the quantitative blank space with a hastily drawn sketch of a treasure chest full of diamonds and pearls.

The fact is that the effects of some of the improvements you think are massive can often barely move the needle in terms of product performance. But likewise, some things you thought were minor are actually massive. To my mind, it's partially a user researcher's job to help the business understand which are which.

Getting comfortable with quant

When I agreed to join Skyscanner one of the big draws was getting unlimited access to all of the data they had. Qualitative research is just one way of finding out more about users and user testing is just one way of finding improvements that can be made to products. In order to persuade the business to fix a problem that is difficult to resolve, I need to help them see the size of the problem. So I need to start by trying to find the size of the problem myself.

I was always keen to see the data but I should have been demanding access to it previously. Instead I was too cosy with the qual I had. It was easier to imagine treasure chests than to find out whether they were there or not.

I should be clear here and explain that I don't mean you need to put a monetary value on your findings. Instead, I mean you should try to size some of them.

Data doesn’t mean numbers

Having said all of this, I also find that people in other disciplines who are very reliant on quantitative data can find it hard to embrace qualitative data. People say 'data' and they often mean 'numbers'. Interpreting those numbers becomes a lot easier when you regularly engage with qualitative data. Some jump to all sorts of conclusions when looking at graphs that can easily be better explained if you have a little more empathy and understanding.

My favourite example was an email conversation regarding the use of our language and currency controls. It turns out that the predominant use of these controls in the US was to change the language to Spanish and leave the currency in US dollars. That wasn't a surprise to me, and the reason I'm saying it publicly is that I believe it's hardly a surprise to you either.

The first conclusion I heard proposed to explain this was that Spanish holiday-makers were using our product while in the States.

OK, so a little more thought about the context would have revealed who is really using the controls in this way. My point is that by watching and engaging with individual users more often, rather than just looking at graphs and charts alone, you build a deeper pool of context you can draw on. You can then look at user behaviour analytics and more often understand what's actually happening.

Try a bit of both

If you are in a product role and you work with quant or qual, try spending a bit of time and effort with the one you’re less comfortable with.
