This series was initiated as a place for you to learn more about service design and journey mapping software. Our co-founder Marc Stickdorn and the Smaply team share their experience on how to embed and scale service design in organizations. The sessions usually kick off with a short introduction to the focus topic to get everybody on the same page, followed by your questions and deep discussions of best-practice examples.
On this page you will find the recording as well as the transcript. The session is also available on Spotify, iTunes and Google Podcasts.
Overview
- [01:35] Introduction
- [07:15] How do you get to good quality results with limited budgets?
- [16:58] When do you suggest which UX research method to use?
- [20:20] What are examples of ethnographic research?
- [24:00] How do you calculate the cost for each research method?
- [28:05] How do you review research data to highlight assumptions, without questioning others’ abilities, if someone else collected the data?
- [31:45] Can we get a full GPS map that records a full route without having to record all the experience points?
Introduction
We always need to understand experiences in context.
Let’s start with the context of research, and that is customer satisfaction. There’s a very simple academic model for customer satisfaction called the confirmation/disconfirmation paradigm. It means that at any moment we compare our expectations with our experience. If they match, we’re satisfied. If the expectation was higher than the experience, we’re dissatisfied. If the experience is higher than the expectation, we are delighted, we are very satisfied.
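To make the model concrete, a minimal sketch in code could look like this (the 1 to 5 rating scale and the function name are just assumptions for illustration, not part of the session):

```python
# Minimal sketch of the confirmation/disconfirmation paradigm.
# The 1-5 scale and the function name are assumptions for illustration only.

def satisfaction(expectation: int, experience: int) -> str:
    """Compare expectation and experience for one journey step (both rated 1-5)."""
    if experience > expectation:
        return "delighted"      # experience exceeds expectation
    if experience == expectation:
        return "satisfied"      # expectation confirmed
    return "dissatisfied"       # experience falls short of expectation

print(satisfaction(expectation=4, experience=2))  # -> "dissatisfied"
```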
This model suits a customer journey map perfectly. All the steps you go through before you actually start using services, physical products or digital products drive the expectations.
While you’re using it during the service period, you compare the two. What you see later, when you chat with people, when they share their experiences – on social media for example – is the result of it. Visualize the level of expectation and the level of experience – if they match, we’re satisfied. Too often the experience can’t keep up with the expectation. This results in dissatisfaction.
Or we drove expectations too high, which is often a problem of siloed organizations. There might be a really great team in marketing and sales that others can’t keep up with. They are over-promising, causing dissatisfaction. What we try to understand when we do research is: Do we only want to focus on the experience itself? Do we want to understand satisfaction as the result of it? Or are we looking at the entire journey including the expectations?
The level of expectation varies widely across the different groups you do research on. That’s what we use personas for: to understand what the level of expectation is there.
What we can obviously do is increase the experience. Or we can decrease the expectation. This is something very human, and probably every one of you has done it before. When you invite someone over for dinner, what do you tell them?
Just think about that. Did you tell them that you’re going to stand in the kitchen for hours trying to prepare the best meal ever, or did you tell them that you’ll try something new and you’re not sure if it works? That you’ll just do a little, so they shouldn’t expect too much? We’re levelling down expectations. That’s something very human, and we can also use that in service design to set a level of expectation.
The result of that very often is the emotional journey. At every step of the journey we constantly compare our expectations with our experience. We can constantly measure how satisfied people are and see patterns in that – later I’ll show you a tool that helps you do exactly that.
Let’s talk about research. Research is one of the core activities in service design. I think it is the core activity in service design.
If you don’t really understand user needs, if you don’t really understand what the problem is, you might design great products, great services that no one needs. Research is at the core of what we’re doing.
Let me give you a rough overview of the processes and different methods we use. When I work with clients, they’re often very familiar with all kinds of quantitative methods. They’re good at doing surveys, they’re good at tracking users, and they do all kinds of big data. But what they are sometimes hesitant to do is qualitative research.
I don’t think it’s either/or. I don’t think there should be a fight, because both suit a different purpose. We need quantitative research to monitor KPIs over time. To see how we are doing – are we increasing or decreasing, do we have a problem in a specific channel?
But we need qualitative research to actually get actionable insights, to understand why we see certain effects in quantitative research. The methods we use there are very diverse, both in quantitative and in qualitative research.
How do you get to good quality results with limited budgets?
That is one of the issues every organization has: they don’t have enough budget to do proper qualitative research. What we can learn from academia – this goes back to the 70s – is a concept called triangulation. All the methods we are going to talk about in a minute have a bias in them. The only way to level out these biases is by using different methods.
If you do an interview, people will never tell you the truth. They give you answers that are socially acceptable, that they think will please you. Simply because they’re nice.
People will behave differently as soon as they’re part of a research project – it’s called the researcher bias. And there are loads and loads of these biases out there. The only way to level them out is by using different research methods. Also use different data types. How do you actually collect your data? In which form? Do you take notes – text? Do you take photos, videos or audio files? Do you make sketches? Do you collect artifacts? The richer your set of data types is, the better and more robust your research is.
This is because it allows you to do the third level of triangulation, which is researcher triangulation. We always talk about slipping into the shoes of our customers, but we forget that we are wearing our own shoes. So again, to level out our biases as researchers we actually need different researchers. The only way to understand data that other people collected is by having a rich data set, so you don’t depend only on the data that someone wrote down, but can also take a look at the video or photo, or find an artifact.
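As a rough sketch of how triangulation can be made visible in your data, the snippet below counts how many different methods, researchers and data types support each finding. The data layout and the threshold of two sources per dimension are assumptions for illustration, not a fixed rule:

```python
# Hypothetical sketch: checking how well a finding is triangulated.
# The data layout (method / researcher / data_type per piece of evidence)
# is an assumption for illustration, not a prescribed format.

from collections import defaultdict

evidence = [
    # finding,            method,                        researcher, data_type
    ("queue too long",    "contextual interview",        "Anna",     "notes"),
    ("queue too long",    "non-participant observation", "Ben",      "video"),
    ("queue too long",    "mobile ethnography",          "Anna",     "photo"),
    ("signage confusing", "contextual interview",        "Anna",     "notes"),
]

support = defaultdict(lambda: {"methods": set(), "researchers": set(), "data_types": set()})
for finding, method, researcher, data_type in evidence:
    support[finding]["methods"].add(method)
    support[finding]["researchers"].add(researcher)
    support[finding]["data_types"].add(data_type)

for finding, s in support.items():
    # A finding counts as triangulated here if at least two different
    # methods, two researchers and two data types back it up.
    triangulated = all(len(v) >= 2 for v in s.values())
    print(f"{finding}: {'triangulated' if triangulated else 'needs more evidence'}")
```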
Here are a few examples of research methods that we like to use in service design when we try to understand customer experience and customer expectations. We like to do contextual interviews – interviews in the situational context.
You might think that with the Covid-19 situation this isn’t possible because we can’t go out. Maybe you as a researcher can’t be present in person – in some countries you can, you just have to keep physical distance. But you can use your smartphone and do video calls in these situations. Get as close as you possibly can.
We use non-participant observation – the nickname for that is the fly on the wall. You observe situations, you learn from body language, from the processes that you see.
We use a very traditional ethnographic approach like participant observation. Its nickname is shadowing: we follow people throughout their process. We do that also with employees – I think I say this in every session we do here – customer experience is just one part, and at least as important is employee experience. You need to understand the experience of the employees as well.
We also work a lot with work-alongs with employees, working together with them, kind of an internship. When I talk to management I often sell it through this crappy TV series Undercover Boss. We do the same thing, we just don’t use a fake moustache or anything. But you really need to get as close as possible to your front line, because that really makes or breaks the customer experience.
We use newer approaches like mobile ethnography – we’re going to go a little more in depth on that in a few moments. Also very simple techniques like auto-ethnography, which basically means becoming a customer yourself.
We collect primary data and of course we also use secondary data – any kind of data that you already have. What we try to achieve is to come up with interpreted data. Stuff like key insights, user stories, jobs to be done, whatever you might be using. One way to get there is through visualization. That’s where our tools come into play: our personas, our system maps, our journey maps – they help us to make sense of data, to synthesize data, to analyze it.
The first step when we do research is often the visualization of collected data. Make it tangible on the wall. Create a research wall – you can also do that digitally.
The last thing in my presentation: when we go out we often start with explorative research – we want to learn something new, we want to get answers on why we see certain effects. Hopefully you go out without assumptions, with an open mind. As soon as you have these kinds of interpreted data – key insights, jobs to be done etc. – you might want to do confirmatory research, where you do quantitative research afterwards to confirm what you’ve learned. That’s optional.
Whenever you start with assumption-based journey maps you need to do research to actually base them on facts. It’s dangerous to make decisions based on assumptions.
I would like to give you a very quick overview of ExperienceFellow. The research method behind it is mobile ethnography. Mobile ethnography allows you to invite participants – that can be users, customers, citizens, and it also works great with employees. You invite them to join your project through an app, and then they do a kind of diary study. They can add a new step and document it with text, photos or videos, together with a five-point Likert scale for satisfaction, importance etc.
As a researcher you get this data in real time, visualized as journey maps. I would like to show you briefly what this looks like. It is from a demo project on the public transport experience in Amsterdam. What you see here are 14 different participants – very typical: a few stop after 2 or 3 steps – around a 50% dropout rate – and others give you a lot of data. What you see immediately is what was positive in the journey and what was negative. Remember the emotional journey? That’s exactly what you get out of this.
You have loads of different ways to tag the data, to go through it, to work with it. Look at it on the map, see where people documented it, zoom in – take a look, and sometimes you see clusters of negative experiences, clusters of positive experiences. Then you can look at the qualitative data behind it to find out why something was positive, what the story is. The nice thing about this way of doing research is that you don’t need to be present while doing the research. You can do research projects with hundreds of participants. But it’s still a qualitative approach because you have photos, videos, texts, etc. connected to it.
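To give a rough idea of how such diary-study data can be condensed into an emotional journey, here is a small sketch. It is not ExperienceFellow's actual export format or API; the field names and the 1 to 5 satisfaction scale are assumptions for illustration:

```python
# Hypothetical sketch of turning diary-study steps into an emotional journey.
# This is NOT ExperienceFellow's actual data format or API; the field names
# and the 1-5 satisfaction scale are assumptions for illustration.

from collections import defaultdict
from statistics import mean

steps = [
    {"participant": "P01", "step": "buying a ticket",   "satisfaction": 2, "note": "machine rejected my card"},
    {"participant": "P02", "step": "buying a ticket",   "satisfaction": 3, "note": "long queue"},
    {"participant": "P01", "step": "boarding the tram", "satisfaction": 5, "note": "very smooth"},
    {"participant": "P02", "step": "boarding the tram", "satisfaction": 4, "note": "clear signage"},
]

# Group the ratings per journey step and report the average satisfaction.
by_step = defaultdict(list)
for s in steps:
    by_step[s["step"]].append(s["satisfaction"])

for step, ratings in by_step.items():
    print(f"{step}: mean satisfaction {mean(ratings):.1f} (n={len(ratings)})")
```

The qualitative material (the notes, photos, videos) stays attached to each step, so the averages only point you to where in the journey to look, not why something happened.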
If you'd like to get deeper insights on how to do customer experience research, or how to structure and make use of the data afterwards, you can find more information here.
When do you suggest which UX research method to use?
Never use just one method, always use two or three methods. Think about triangulation.
Let me just look in our little book. There you will find five different categories that I put the research methods into. I suggest taking two or three methods, one each from different categories.
- The first one is desk research. Desk research means any kind of research you do at home: with Google Scholar, asking colleagues, finding existing research in your organization.
- The second one is self-ethnographic approaches – something like auto-ethnography or online ethnography.
- The third category is participant approaches, such as participant observation, contextual interviews etc.
- Number four is non-participant approaches – something like mobile ethnography or non-participant observation.
- The fifth category is something that people often do not think about when they hear “research”: co-creative workshops. Co-creative workshops are a research method. Think about a journey mapping workshop: you can invite 10-20 customers and create a journey map together. You do this to learn from your participants. That’s actually a valid research method.
I recommend not only choosing the right method but choosing two or three from different categories. Think in research loops: don’t collect all your data first and then start the analysis. Rather, do a quick loop – maybe an hour, maybe a few hours – and check whether the methods you picked are working. Do they bring data that is valuable for you? If not, change your methods.
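As a small illustration of the advice above, the sketch below picks one method from each of three categories at random. The method lists are examples from this session; in practice you would of course choose deliberately based on your research question:

```python
# Illustration only: combine two or three methods, each from a different category.
# The random pick stands in for a deliberate choice based on your research question.

import random

categories = {
    "desk research": ["literature review", "asking colleagues", "existing internal research"],
    "self-ethnographic": ["auto-ethnography", "online ethnography"],
    "participant": ["participant observation", "contextual interview"],
    "non-participant": ["mobile ethnography", "non-participant observation"],
    "co-creative workshops": ["journey mapping workshop", "persona workshop"],
}

picked = random.sample(list(categories), k=3)        # three distinct categories
plan = {cat: random.choice(categories[cat]) for cat in picked}
print(plan)  # e.g. {'desk research': 'asking colleagues', 'participant': 'contextual interview', ...}
```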
What are examples of ethnographic research?
I was working for a manufacturer of ticket machines – ticket machines for train tickets, for subways etc. When I started working with them, I asked them how they do user research. They were very proud and happily told me about their new research lab and showed it to me. It was absolutely fantastic. It had cameras everywhere, sensors for face recognition; you could analyze whether people were smiling or confused, etc.
I then asked them: “Do you also do any kind of contextual research?” They said: “No, why? We have a lab, we invite people to come to us. That’s way more convenient for us.” I understand, but the issue with that is that in the lab you can only ask the questions you already know, that you are aware of. So I challenged them and asked: “Should we go out and do some contextual research to see if we come up with new things that you’re not familiar with – in an hour?” They were a little hesitant at the beginning, but then they said: “Yes, sure, let’s go out.”
We went out and it took half an hour until the following situation appeared: there was a young lady with her daughter and her shopping bags. She was standing in the queue until it was her turn at the ticket machine. When it was her turn she put down the bag, entered the ticket she wanted, paid with credit card and entered the credit card PIN – and the moment she had entered her PIN, her daughter ran away. Now the mother had to make a decision: daughter or credit card? Obviously she ran after her daughter, but it’s not a situation you want to put your user in.
The system was not designed to cancel at this moment. For the researchers this was new – they had never thought about it before. That’s the thing about contextual research: you discover things you never thought about. It’s exploratory research – once you’re aware of it, you can think about a prototype, evaluate it in the lab to see whether your solution works, but at some point you want to go out again.
We’ve done it in hotels, we did it for airlines, we did it for governments. It’s a very vast field; you can actually research anything.
How do you calculate the cost for each research method?
Estimation is the right word, because it depends on one big aspect: how many people you ask. How many people do you need to ask? That’s the big difference between quantitative and qualitative research. In quantitative research you can calculate upfront how many people you need to ask to have a representative sample size. In qualitative research we use a different concept. Qualitative research doesn’t care whether 72.4% or 72.3% of our users have this problem; we want to find out what the five biggest issues people have are, and maybe end up with a ranking of these.
The concept here is called theoretical saturation. Theoretical saturation means: you ask 20 people and across these 20 people there is a clear pattern in your data – maybe there are three big problems – and the twenty-first person you ask just confirms what you already know, the twenty-second just confirms what you know. Then you have reached theoretical saturation, which means asking more people will not bring you any new results. It just confirms what you already know. Your theory is saturated.
If twenty to twenty-five people that you asked show a really strong pattern, the probability that the next twenty to twenty-five people will tell you something completely different is negligible.
But only if you selected your participants randomly – that’s very important. If you ask twenty-five friends it might be different, but if they were selected randomly, no worries.
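A rough way to picture theoretical saturation in practice: code the themes per participant and stop once the last few participants add nothing new. The sketch below is an illustration only; the threshold of three consecutive participants without new themes is an arbitrary assumption:

```python
# Illustration of theoretical saturation: stop when the last few participants
# add no new themes. The window of 3 "empty" participants is an arbitrary choice.

def saturated(interviews: list[set[str]], window: int = 3) -> bool:
    """interviews: the themes coded per participant, in interview order."""
    seen: set[str] = set()
    runs_without_new = 0
    for themes in interviews:
        new_themes = themes - seen
        runs_without_new = 0 if new_themes else runs_without_new + 1
        seen |= themes
        if runs_without_new >= window:
            return True
    return False

interviews = [
    {"pricing unclear", "long queue"},
    {"long queue"},
    {"pricing unclear", "app crashes"},
    {"long queue"},
    {"app crashes"},
    {"pricing unclear"},
]
print(saturated(interviews))  # -> True: the last three participants added nothing new
```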
How do you estimate upfront how many people you need to ask? You don’t know. That’s the thing – the strength of the effects determines how many people you need to ask. If you have three big issues, maybe ten people are enough and you reach theoretical saturation. If there are twenty minor issues, it takes longer to reach theoretical saturation. That’s why it is hard to calculate the cost upfront for every method.
However, what you can do is calculate the minimum – the minimum number of people you need to ask. My minimum is always ten to twenty; that’s something you should always ask, observe etc. Take this minimum and maybe think about a worst-case scenario – reality is probably somewhere in the middle. That helps you estimate it, but it’s really hard to say upfront how much it will cost.
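A back-of-the-envelope sketch of that minimum versus worst-case estimate could look like this. All numbers (participants, hours per participant, hourly rate) are made-up assumptions for illustration:

```python
# Back-of-the-envelope sketch of the "minimum vs. worst case" estimate.
# Every number here is an assumption for illustration, not a recommendation.

HOURS_PER_PARTICIPANT = 2.5   # recruiting, interviewing/observing, first analysis
HOURLY_RATE = 80              # internal or agency rate, in your currency

def research_cost(participants: int) -> float:
    return participants * HOURS_PER_PARTICIPANT * HOURLY_RATE

minimum = research_cost(10)     # the "always at least ten to twenty" floor
worst_case = research_cost(40)  # if saturation comes late

print(f"Budget roughly between {minimum:.0f} and {worst_case:.0f}; reality is usually in between.")
```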
Have a budget, start with two or three weeks of research, and see if you can find a pattern. If a pattern emerges, confirm it with a few more participants and see if it also gets confirmed through the other research methods. If you work that way, you at least don’t spend too much money, or waste money doing a lot of research that no one needs. That always happens if you first do all your data collection and only afterwards do the data analysis. So think in loops – the shorter and faster these loops are, the higher the probability that you identify your pattern early enough in your process.
How do you review research data to highlight assumptions, without questioning others’ abilities, if someone else collected the data?
Triangulation again: if I have to work with data from other people, I first check whether they followed the basics of good-quality research, which is triangulation. Did they use different methods, different researchers and different data types? If you find a strong pattern within this triangulation, fine. But say there is one researcher who finds certain things and it’s only this one researcher – then it’s probably hard not to question the ability.
Very often there is a hidden agenda at play. It might be that someone from senior management wants to make a point about something. Could be – but triangulation is the best way to handle it, and the nice thing about it is that you can just show the data and let the data speak: “Look, we found these things and we can triangulate them – triple triangulation: methods, researchers, data types. We can be pretty sure this actually exists. About these other points we have doubts, because only single people found them.” If you visualize that, the data will speak for itself. But it is a political thing.
Follow-up question: What if they didn’t follow basic research practice?
Honestly – you can’t come up with great insights from crappy data.
At some point you have to tell people that they didn’t follow basic research practice. And if you have a very biased data set, all your results will be biased in that direction. That’s why we have these standards for doing good qualitative research. At some point you have to tell people that this doesn’t work – even if it’s hard. I don’t have a good answer for that. You have to ask yourself what is more important. Values are only values if you stick to them, even when it’s hard.
Do you recommend that customers also use the ExperienceFellow app?
Yes, absolutely, it’s actually great to do with customers. Depending on the context you obviously need an incentive for them to participate – why should they participate? It’s like any other research method: you always need to think about how to incentivize people so they’re motivated to take part. But yes, absolutely. That’s what it was designed for.
Can we get a full GPS map that records a full route without having to record all the experience points? I love the GPS element of ExperienceFellow, but you currently have to enter the experience point to trigger the GPS recording.
No, you can’t.
That was by design, because we want people to be in control of that.
That is why you can even set up a project without GPS, or without the ability to take photos or videos. There have been research projects done in hospitals, where data privacy is a really big issue, and it works for that. We put a limit there: we thought the recording should be triggered deliberately and not used to spy.
Another good reason for that is that in some places data costs a lot of money, and I would not be willing to participate in a study where my GPS is running the whole time the app is there – because it would cost me a fortune. Data is not cheap.
And now, what's next?
Check out the other Ask Marc sessions about different topics of human-centric work, like multi-persona maps, creating CX insight repositories, and many more.