I have often been asked how I go about persuading sceptical stakeholders (whom I’ll abbreviate to SSH) that UX practices, including research, are the right thing to do – and that they are effective. The alternative is usually touted as either ‘just draw what I’m telling you’ or ‘if you’re so good at this, why do you have to take all this time and money to figure out what to do?’.
I’ll say right up front that the single most effective method that I’ve found is to get the SSH to attend some user research in person. The SSH will have their own preconceptions of what will and won’t be effective and will often believe they intimately understand how customers think. You can argue and present an alternative view to them based on experience, previous research, numbers, whatever – but ultimately it just comes down to your opinion (sure, with some backup) against theirs. There’s no emotional or visceral connection for them.
Once the SSH watches a real customer struggling with an ‘easy and obvious’ interface, or articulating a completely different rationale and way of thinking from the SSH’s assumptions, then that emotional connection is made. Either they choose to accept what they’ve seen and heard or they choose to ignore it. If the latter, it’s a different ballgame, but any reasonable person will concede that they have learned something useful and new. I would also add that if, for example, you are doing 1-1 depth interviews for usability, the SSH needs to attend at least 3 sessions and preferably more. They need to see that the issues arising are not the whim of a single atypical customer. If they see a particular issue raised by even two or three people, the message starts to sink in.
Even so, there can still be some peripheral objections about the methodology, the way the questions were asked, or that the questions didn’t get at the heart of the matter. So there are some things to do to ensure that the viewing experience has the greatest impact. These can be summed up as ‘involve the SSH all the way through’.
Firstly, make sure you understand not only the business objectives of that SSH but also their personal drivers. I’ll take it as granted that you’re balancing business objectives with customer needs in a design, but if you want to take a SSH on the journey with you then you need to know if they are dealing with a similarly sceptical boss they also need to convince, or if they are new to their role and feel they need to prove themselves quickly – or whatever. This understanding will inform your conversations and the supporting material you provide them with.
Even if the SSH has some ideas about design, it may be that showing them some options or introducing technical constraints will sow some seeds of doubt about their own invincibility. I’ve found a workshop with a limited number of people from commercial, engineering, design (and whoever else is needed – legal, PR etc) can be effective. The idea again is that it’s not just you arguing the toss but a session of domain experts focusing on the issue at hand, working through constraints, enablers and options. At the end of the session there may be outstanding actions for people to go away and find out about – there may be aspects of business process, technical possibility or law to be clarified before significant further steps can be taken. In a small organisation this session may just be a few people round a table – in a large organisation it could be a bigger meeting.
It’s important throughout all this to present a humble face. Whilst you may be convinced that a given approach is the right one you need to show that you are listening and considering alternatives – just as you are asking others to do.
When it comes to planning some research then the SSH has to be included in agreeing the objectives, method and conduct of the research. You don’t want them to have that wiggle room afterwards. If the SSH has agreed to all these things and been given ample opportunity to voice any objections or issues then they will be more committed to the process. This doesn’t mean that you have to do everything they ask. You still need to be the expert running the show – the person who knows the right way to do things. So you need to find a way to incorporate their input in an appropriate manner. Sometimes it’s necessary to include a design option that you are convinced won’t work just so that the SSH can see it for themselves and to show that you’re not trying to ‘rig’ the outcomes.
If you look at resources on stakeholder management you’ll find plenty of other techniques that you can use alongside what I’ve described here – and it’s a good idea to do so. Nevertheless, if you make sure you are engaging in constructive dialogue, showing that you are listening and exploring options, and involving the SSH all the way through in the planning and execution of the research, then you’ll find it goes a long way towards turning that sceptical person into an engaged ally.
Some years ago at British Airways there was a project to design an interface for a new in-flight entertainment system. My then-colleague and friend Mike Lock was project managing. Mike is the real godfather of digital usability at British Airways. Whilst I set up and developed the UX, Design and Research team, I did so off the back of what Mike had already done to sow the seeds of awareness and need. He says I’m too modest, but I call it as I see it.
For the design of the IFE interface a small number of companies had been asked to present a concept. At the end of the presentations it was clear to Mike that only company X had grasped the issues around information architecture and navigation, although the visual design was a bit off. Going round the table, he was shocked to hear that no-one else favoured company X. All the other (more senior) stakeholders in the room went for a more on-brand visual design. Mike felt overwhelmed – as if all the big guns were pointed at him and he had only a cardboard shield to deflect the blows.
‘But’, said Mike, ‘the one you all like does look nice, but no-one will be able to use it. It’s not a design where the usability issues can be fixed – it needs throwing away and starting again.’ Sceptical faces were all he saw, but he kept going. ‘Although the design from company X isn’t quite on brand it is usable, and we can fix the design elements relatively easily.’ The meeting broke up with the stakeholders thinking that Mike didn’t get it, and there was no way they were going with company X. But Mike still wasn’t giving up.
That meeting was on a Friday and the group was going to reconvene on Monday to decide what to do. Mike was desperate to find a way to influence the decision towards the one he knew to be correct. Over the weekend he took the images from the presentations and turned them into clickable prototypes using PowerPoint – it was all that was available in those days. He then videoed his mum ‘using’ the two interfaces. This was in the days of camcorders that recorded on tape. She couldn’t use the pretty design but got on pretty well with the one from company X.
On Monday Mike played the video to the group. Ultimately he won the argument, company X got the contract and re-worked the design, and it was implemented on many aircraft. It left a bad taste in Mike’s mouth though. He had to work too hard to prove a simple point, and had taken a load of senior shit for it.
Something similar to that scenario has played out again and again over time. There are lessons to be learned.
Firstly, you can’t assume that your stakeholders get what it takes for an interface to be usable. It’s one of those contradictions in life – we’ll all swear at an interface that frustrates us, but some of us would still build an interface for our own company that incorporates the same frustrations. It’s human nature – we’re often not aware of the causes of our emotions, and most people don’t analyse exactly what it is that they don’t like about a website or an app. They just ‘know’ that it doesn’t work for them.
Secondly, if you have business people who are (relatively) sane and rational it should be possible to influence their perception of effective design. They do actually want it to work. There are different ways of doing this, and sometimes it depends on the person as to what the best way is. Some people like to review a spreadsheet of analytics following a multivariate test, but usually the best way of snagging a stakeholder is for them to see a real customer being unable to use an interface that that stakeholder thought was ok. It hits at an emotional level that has impact. Get them to watch live research in person – but if you can’t, then show the video. If you can, involve them in the setup of the research so they can’t quibble with the methodology afterwards.
Thirdly, agencies vary in their expertise. Some are better at UI, some excel in IA, some at ecommerce. It’s critical when engaging an agency to make sure they have the expertise to do what the client wants, and to be clear about what success looks like. I wrote another post on why agency/client engagements often don’t work.
The landscape today
Designing for mobile forces the designer to ruthlessly prioritise content and produce a compact design. Users focus more because there’s less to look at, and identify more issues with confusing and irrelevant copy which they would just ignore on a desktop screen. More people are using phones more of the time, but desktop isn’t dead yet. I don’t know if it’s a reaction to compact phone design, but desktop design seems to have gone the other way.
A short while ago I was talking to a senior business manager about her company’s desktop site. It had been designed by an agency and she had complained that there was too much scrolling. The agency had ‘explained’ that it was ‘modern design’. The business manager was right, there was too much scrolling.
It seems to be the vogue to have enormous images, lots of white space, and huge font sizes. If you have an ‘artistic’ site or a particular brand image all of this might be appropriate. However, for most ecommerce or informational sites it isn’t appropriate. Customers want to get in, do their stuff, and get out. They want the experience to be a perfect combination of ease, pleasure, succinctness, entertainment, effectiveness etc. And yes, that does include a site that’s pleasant to look at. But if they have to repeatedly scroll just to find out what’s on offer, or to find the information relevant to them, then it’s not achieving their goals and it’s not helping the company to win their business. There are some sites where I’ll go to read an article and I almost feel like I’ve been punched in the face by the huge font that’s difficult and unpleasant to read.
My perception is that many, if not most, agency sites are culprits of a triumph of design over communication. If they built their clients’ sites like they build their own, their clients would go out of business.
The bottom line
I’ve had excruciating debates with got-religion UI designers who can’t bear to see the excellence of their design debased and compromised in pursuit of mere money. They don’t disagree that the design impacts usability – they just think making the design right is more important. I’ve told them I’m not prepared to explain to the CEO that we chose to make lower profits so that we could adhere to the designer’s idea of a nice-looking interface.
I want to be clear that I’m not at all putting down UI. It’s absolutely essential. It’s just not the reason why we do all this work.
The reason for the existence of UX, UI, research, interaction design, information architecture etc etc is to be effective in the mission of the organisation paying for the work to be done, which is usually to make money and/or to communicate. We need to focus on the goals and objectives for the interface, where ‘on brand’ is a primary goal and ‘nice looking’ is a secondary goal.
It all needs to come together. Figuring out what works needs to be based on research, on facts – if anyone can stomach facts in a post-fact world.
I see the same mistakes – as well as some new ones – being made over and again in surveys and questionnaires. Most businesses and other organisations are dependent on surveys to a greater or lesser extent. They use them to find out what customers think of them, or what products they should be developing, or what issues need fixing… etc etc. Yet often those organisations are not getting accurate information. If survey questions are confusing or ambiguous, or constrain answer choices, they will be getting a skewed view of responses. It’s like a political poll asking, for example, ‘which candidate do you like’, rather than ‘who do you intend to vote for’. What do you actually want to know?
My advice is that if you are responsible for a survey of any sort, spend some time getting yourself up to speed with what makes for a good survey, and what some of the pitfalls are. It’s easy when you’re familiar with a topic to ask questions that respondents won’t necessarily understand, and it takes some self-discipline and customer knowledge to avoid the problem.
You can’t always entirely trust the ‘experts’ either. I’ve had many a debate with professional purveyors of surveys about their proposed wording for questions, as I’ve felt that they were
unclear to my particular customers
too similar to other questions
not offering adequate response choices
You need at least to be able to judge whether the professional advising you really knows their stuff.
It pays to test a survey on a small sample before general release – and that means talking to people, and understanding how they interpret the questions, and whether it’s the interpretation that you intended.
There are many books on how to write questionnaires and surveys. One that I’ve read and can recommend covers the ground well, with good examples – though don’t expect a riveting read, it’s a textbook.
Now I’ll describe how I decided to approach a survey question at British Airways that was more complicated than it seemed, and then I’ll give some examples from recent surveys that I’ve filled in.
The British Airways question
I was working on the wording for the ba.com site survey, and ended up with some convoluted logic. It wasn’t convoluted to the people filling it in (hopefully), as the sequence would make sense to them. It didn’t make a lot of sense though to colleagues and others who reviewed the questions, and I had to defend the structure many times.
When customers filled in the feedback survey on ba.com, we wanted to know if they were
a member of the Executive Club (the frequent flyer scheme, abbreviated to EC)
if so, which tier they were in (Blue, Bronze, Silver, Gold)
if not, whether they were registered with a site login
or whether they weren’t registered at all
We could have gone with this:-
An Executive Club (EC) Member
Registered on ba.com
The problem with this is that some people don’t know if they are EC members. Generally, those who are members know it, as they’ve gone through the process of joining, but otherwise people could ask, “How do I tell if I am?”. They might think that just registering on the site, or buying a plane ticket, would give them membership. Internally within BA it came as a surprise to some that there could be this confusion.
It would be a little better to have
An Executive Club Member
Registered on ba.com (but not an Executive Club member)
The problem would still remain that someone who was either an Executive Club member or registered, but wasn’t sure which, would answer ‘don’t know’ – and then we wouldn’t know whether they were registered at all.
It would also potentially confuse some Executive Club members who would think that they are both a member, and also registered.
What we went with was this.
<Do you have a login for ba.com?>
If the customer said no, skip to next question, if they said yes, then we asked
<Are you an Executive Club Member?>
If they said no, then they were registered, but not EC. If they said yes, we asked what Tier they were.
It’s still not perfect, but it does at least mean that we got better quality results on whether people were registered or not (without having to interpret what ‘registered’ means).
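The branching above amounts to simple skip logic. Here is a minimal sketch in Python – the question wording comes from the survey described above, but the `ask` callback and the return labels are mine, purely for illustration:

```python
# Skip logic for the ba.com survey questions described above.
# `ask` stands in for however the survey tool poses a question
# and returns the respondent's answer.

def classify_respondent(ask) -> str:
    # Start with the unambiguous fact: do they have a login?
    if ask("Do you have a login for ba.com?") != "yes":
        return "not registered"
    # Only login holders see the EC question, so 'no' here
    # reliably means 'registered but not an EC member'.
    if ask("Are you an Executive Club Member?") != "yes":
        return "registered, not EC"
    tier = ask("Which tier are you?")  # Blue, Bronze, Silver or Gold
    return f"EC member ({tier})"
```

Feeding scripted answers through each path shows how every respondent ends up cleanly classified without ever having to interpret the word ‘registered’.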
One of the most frustrating things that I see in a survey is when none of the answers apply. I’ve lost track of the number of times I’ve told people that they need an ‘other’ option. Sometimes there is a closed set of potential answers – either you bought something on this visit or you didn’t.
I filled in a survey after having attended a Rock weekend at Butlins (it was great). I answered a question saying that the experience could have been better, and then I was presented with this question asking what could be improved.
The problem is that my reason wasn’t any of these things, yet the only way to progress is to pick one. So I did. If they had an ‘other’ option which allowed me to enter text, I would have let them know that some of the behaviour from other guests who had had too much to drink had been annoying. But they’ll never know.
Butlins also asked this –
The drop-downs were the same for each, and showed this –
I would expect Butlins to have a good handle on what matters to their customers in general, but for me, Internet access can be a deal breaker for a holiday, and it’s not in the list. My wife always wants to know if there is a hair dryer in the room. These may not be our number one issue, but if you’re going as low as number five, then you risk missing out.
First Great Western (FGW) ask about reason for travel.
I think it’s reasonable to assume that business, commuting and leisure account for the majority of train journeys. But what if you’re travelling to a funeral, or other reasons? It may be a small enough proportion that FGW think it’s not worth making the survey more complex by having an ‘other’, which they are entitled to do. But each time a respondent has to think harder about a question, it’s an additional point at which they are likely to drop out.
It’s fairly common on a site survey to ask what the purpose of visiting was (again, it can be problematic to assume you know all the answers), and then to ask whether you were successful.
Maplin and First Great Western both use Foresee to serve their surveys, and they take a different approach to each other.
You can see that Maplin offer a ‘partial success’ option, which FGW don’t. It’s likely that for many sites a significant proportion of customers will be partially successful. With FGW, I might have come to buy a train ticket, and did so, but not at the price or the time that I wanted. I count that as a partial success. By only offering the binary choice, customers are forced to make a qualitative judgement about which way to vote. When that happens, the survey owner loses useful information. That’s especially so if the customer votes for ‘success’, because then you don’t know there was an issue at all. You can still ask ‘was there anything else that would have improved your experience today?’, but then you have a pile of verbatims, and the issues are lost from the headline reporting of the success question.
I filled in a Which? survey about my car. This was one of the questions, asking how I financed the purchase.
As with any web text, survey respondents don’t necessarily read the detail of each question. They will scan, and stop at the first answer they think applies to them. In this list, the choices for ‘Personal Contract Hire’ (the first option) and ‘Personal Contract Purchase’ (further down) are quite similar, and unless you are a wizard on car finance you have to read the detail to understand the difference. I suspect that Which? are going to get more responses to the first option than actually apply. You’ll get people like myself who have ‘Personal Contract Purchase’, and who read just enough to decide that the first (and wrong) choice applies to them.
In such cases, the two options should be next to each other. It doesn’t entirely solve the problem (it would further help if the order was randomised), but there’s more chance that people will spot the alternative, rather than just going with something that looks close enough.
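Per-respondent randomisation is straightforward in any survey tool that supports it. As a hedged sketch – the option list here is abridged from memory, not the actual Which? wording:

```python
import random

# Abridged, illustrative option list - not the actual Which? wording.
OPTIONS = [
    "Personal Contract Hire",
    "Personal Contract Purchase",
    "Hire Purchase",
    "Personal loan",
    "Outright purchase",
]

def options_for(respondent_id: int) -> list[str]:
    # Seeding with the respondent id gives each person a stable
    # but (usually) different order, spreading primacy bias across
    # all the options rather than piling it onto the first one.
    rng = random.Random(respondent_id)
    return rng.sample(OPTIONS, k=len(OPTIONS))
```

The seeded generator means a respondent who reloads the page sees the same order, while different respondents see different orders.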
Time and again I’m filling in a survey and think ‘what do they mean by that?’. Often, these will be technical questions, or ones requiring a subjective judgement but no guidance is offered.
On Google maps I often answer questions about places I’ve visited. It seems that Google has a standard set of questions, some of which do puzzle me.
What you and I consider to be ‘trendy’ can vary. Google may be ok with this, but I usually just go for ‘not sure’.
I always struggle with this one. Is a more expensive pub ‘upscale’? There are probably venues that clearly are, like the Ritz, and those that clearly aren’t, like McDonalds, but where’s the line?
Shouldn’t the question be ‘Is this place popular with travelers?’. Each time I see this I have to stop and think about what it means. Anyway, how can I tell who is a traveler? Does it mean tourists? People just in transit?
Google seem to be experimenting with images as well. Here’s one question I was asked.
I can well imagine that Google could be experimenting with the automation of image choices. Nevertheless, whilst asking me which image is more ‘helpful’ (what does that mean? Should it be ‘representative’?), the picture on the left is of Windsor, rather than Slough. The picture on the right is of some offices just outside Slough. I don’t think either are ‘helpful’, although the one on the right is at least of Slough.
There could be some rhyme and reason to all this. All I’m doing here is pointing out some of the confusion these things cause to me, and readers can decide for themselves whether it’s useful or relevant.
I filled in a Which? survey about pet insurance. We have a cat. The survey asks what type of cat it is.
As you can see, the response is selected with a checkbox, but unlike radio buttons, checkboxes are not mutually exclusive. This doesn’t make sense, as the cat can only be of one type. If you select more than one type, you get an error message.
This could easily be avoided by using radio buttons. Whilst most people aren’t going to pick multiples, if you have a cross-breed you might pick two boxes, or you might pick one, and then spot a more accurate description, and go for that. The initial question also does not specify that only one choice can be made.
Viking sell office supplies. They also use Foresee to serve their survey. At the end of the survey this is what you see.
It’s good that it says thanks, but where do you think ‘Contact Us’ links to? I’d assumed it would allow me to contact Viking, as I’m answering their survey, but it actually links you to the Foresee site.
Many years ago we discovered that some of our customers were contacting our survey supplier under the mistaken impression that they were contacting us. Worse, the supplier was responding directly, rather than passing the messages back. It needs to be clear who the contact is with. Own your own survey.
A couple of positives
I’ll finish off by pointing out a couple of positive things I’ve seen.
This is a good(ish) sign-off from Butlins, thanking the customer for taking the time. It’s a shame that the message about entry into a prize draw is so small and barely readable. More could be made of it – and a happy picture would add to the experience.
Customers who respond to a survey may be inclined to help out with further research. FGW ask if customers are willing to do so, and it’s possible to build up quite a database of willing customers that can be segmented by the responses to the survey. The wording could be tightened up and made a bit more visually appealing though.
Finally, from the Which? survey on pet insurance, there’s a question about the age of the cat. It’s good that there is encouragement to answer approximately if you’re not sure. It gives that bit of permission not to sit and agonise about being precise.
This question reminds me of applying for car insurance years ago. Many insurers asked for the date when your licence was issued. In fact, all they were interested in was whether it was issued more than a certain number of years ago. It would have made my life easier if they had just asked that.
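The insurers presumably only ever reduced that date to a yes/no internally, something like the sketch below. I’m assuming the threshold logic here, and ignoring leap-day edge cases:

```python
from datetime import date

# What the insurers presumably did internally with the issue date:
# collapse it to a single boolean. They could have asked the boolean
# directly and saved the respondent digging out their licence.
def held_licence_over(issue_date: date, years: int, today: date) -> bool:
    # True if the licence was issued more than `years` years ago.
    # (Naive year arithmetic; a 29 February issue date would need care.)
    anniversary = issue_date.replace(year=issue_date.year + years)
    return anniversary <= today
```

If the only thing the business logic ever consumes is this boolean, the survey should ask for the boolean.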
When I started out in UX there were some designs that I was convinced would work, some I wasn’t sure about, and some that I was convinced wouldn’t work. Simplistically, I think of it like this.
Then, as I learned more, and in particular did more research, I found that I wasn’t always right. Odd that. This had the effect of increasing my range of ‘might work’, as there are so often surprises. So the model changed to this.
There were fewer designs that I thought definitively would or wouldn’t work (or would be better than an alternative), but I was more confident of those judgements.
As time went by (maybe there’s a song there…), people started to say things like ‘look, just because you did some research five years ago, doesn’t mean this design will fail now’. I don’t think it was my communication style that earned this response (but reference Discussing Design). So I tried to explain that there were some things that you discover through research that don’t change, or don’t change much, and that there were some things that were a result of context and time, and would be more likely to change over time. I tried to explain that if I was citing older research, it was only to illustrate a principle. Sometimes if I thought a design wouldn’t work it was on the basis that it broke a principle, and people assumed it was because of some old research.
I’ll give you some examples.
What gets old
Some years ago when we did usability testing on ba.com, it was common to find people who were surprised that you could book a hotel on the site. These days, it tends to be the other way round. People are used to the cross-sell. The fact of being able or not able to book a hotel on an airline website isn’t related to human psychology. It’s cultural and can be learned, so you can’t rely on old research.
Another more current and general example is the hamburger menu. It’s commonly used in mobile designs, although any research that I’ve seen says it’s most effective when used in conjunction with the word ‘menu’. Some designers maintain the style for desktop users. This makes no sense for a number of reasons: since you have more space, you should expose the main navigation elements rather than make users search for them, and many older users of desktop sites have never learned the hamburger style. This is something that is likely to change over generations.
Another example that I found interesting was when we were doing some research in the US. In the checkout process the customer was offered the opportunity to get a credit card on which they would earn miles, and they could use it to pay for this flight. In our mockups we used the words ‘instant credit’. This was just after the sub-prime crash around 2008. Users reacted strongly against this phrase. When we changed the words to something more like, ‘get a card, earn miles, pay now’, the reaction was quite different. It was the same offer, just different positioning influenced by what was going on in the world at the time. It’s quite possible that at another time ‘instant credit’ would be more appealing.
What doesn’t get old
One example of what’s not going to change is the impact of grouping and separation, a fundamental principle of human cognition.
Depending on screen size and resolution, a user could quite possibly scroll to a position like this. At a glance (which is all it should take), it looks like the upper price relates to the wheel below. They are spatially closer, and there is a lot of white space above the price. There are faint separating lines, which a user may or may not notice depending on the quality of their screen and eyesight. It’s confusing.
There is much clearer grouping of associated information, and separation from others.
Even better is Argos, using a card style.
So, to bring it back to old research. If I were to look at the page of results from Cycle Surgery, I’d say that at the very least it could be improved, not because of any research, but because it breaks the rules of how people look at things. I don’t need to do any research to figure that out. Items close together are related (we assume). Items with a strong separator are not related. Sometimes a poor design can cause conflict between these principles. Argos overcomes these issues.
When people look at a list of flight choices, the vast majority start by trading off price and time. The cheap flight looks good, but you don’t want to get up at 4 am to catch it. 11 am is really convenient, but too expensive, so the 9 am is a reasonable compromise.
That’s the first calculation. After that, people will look at seat availability (can we all sit together?), or whether it’s the aircraft type they want to fly on. Sometimes there’s a choice of airports, and so on. These things aren’t to do with the basics of human psychology, but they are deeply rooted, and unlikely to change anytime soon – unless other factors become a lot more important for reasons I can’t currently think of. But it could happen.
I’ve lost track of the number of times we had to reinvent the wheel to discover the same things because someone new was working on the design. From the point of view of the business, it’s not the most efficient use of corporate memory, but from the point of view of the designer, it does mean that they get validation from users rather than from some bloke in the office who says he’s seen it all before. There’s a balance to be struck there.
When you do research, some findings will be dependent on context, and can change over time, or with different personas. Others are constant, based on how our brains work. The third category is in between, where the design pattern might change in the future, but it would take something significant for that to happen.
When you review old research, ask yourself which category those findings fall into. Take it back to principles of psychology and design. The answer doesn’t dictate that you follow exactly the same design as was tested previously, but the fundamental approach will be directed by it.