How to convert sceptical UX stakeholders to allies

Image from rawpixel.com

I have often been asked how I go about persuading sceptical stakeholders (which I’ll abbreviate to SSH) that UX practices including research are the right thing to do – and that they are effective. The alternative is usually touted as either ‘just draw what I’m telling you’ or ‘if you’re so good at this why do you have to take all this time and money to figure out what to do’.

I’ll say right up front that the single most effective method that I’ve found is to get the SSH to attend some user research in person. The SSH will have their own preconceptions of what will and won’t be effective and will often believe they intimately understand how customers think. You can argue and present an alternative view to them based on experience, previous research, numbers, whatever – but ultimately it just comes down to your opinion (sure, with some backup) against theirs. There’s no emotional or visceral connection for them.

Once the SSH watches a real customer struggling with an ‘easy and obvious’ interface, or articulating a completely different rationale and way of thinking about what they are doing from the SSH’s assumptions, then that emotional connection is made. Either they choose to accept what they’ve seen and heard or they choose to ignore it. If the latter it’s a different ballgame, but any reasonable person will concede that they have learned something useful and new. I would also add that if, for example, you are doing 1-1 depth interviews for usability, the SSH needs to attend at least three sessions and preferably more. They need to see that the issues arising are not the whim of a single atypical customer. If they see that a particular issue is raised by even two or three people then the message starts to sink in.

Even so, there can still be some peripheral objections about the methodology or the way the questions were asked, or that the questions didn’t get at the heart of the matter. So there are some things to do to ensure that the viewing experience has the greatest impact. These can be summed up as ‘involve the SSH all the way through’.

Firstly, make sure you understand not only the business objectives of that SSH but also their personal drivers. I’ll take it as a given that you’re balancing business objectives with customer needs in a design, but if you want to take a SSH on the journey with you then you need to know if they are dealing with a similarly sceptical boss they also need to convince, or if they are new to their role and feel they need to prove themselves quickly – or whatever. This understanding will inform your conversations and the supporting material you provide them with.

Even if the SSH has some ideas about design it may be that showing them some options or introducing technical constraints will sow some seeds of doubt about their own invincibility. I’ve found a workshop with a limited number of people from commercial, engineering, design (and whoever is needed – legal, PR etc) can be effective. The idea again is that it’s not just you arguing the toss but a session of domain experts focusing on the issue at hand, working through constraints, enablers and options. At the end of the session there may be outstanding actions for people to go away and find out about – there may be aspects of business process, technical possibility or law to be clarified before significant further steps can be taken. In a small organisation this session may just be a few people round a table – in a large organisation it could be a bigger meeting.

It’s important throughout all this to present a humble face. Whilst you may be convinced that a given approach is the right one you need to show that you are listening and considering alternatives – just as you are asking others to do.

When it comes to planning some research then the SSH has to be included in agreeing the objectives, method and conduct of the research. You don’t want them to have that wiggle room afterwards. If the SSH has agreed to all these things and been given ample opportunity to voice any objections or issues then they will be more committed to the process. This doesn’t mean that you have to do everything they ask. You still need to be the expert running the show – the person who knows the right way to do things. So you need to find a way to incorporate their input in an appropriate manner. Sometimes it’s necessary to include a design option that you are convinced won’t work just so that the SSH can see it for themselves and to show that you’re not trying to ‘rig’ the outcomes.

If you look at resources on stakeholder management you’ll find plenty of other techniques that you can use alongside what I’ve described here – and it’s a good idea to do so. Nevertheless, if you make sure you are engaging in constructive dialogue, showing that you are listening and exploring options, and involving the SSH all the way through in the planning and execution of the research, then you’ll find it goes a long way towards turning that sceptical person into an engaged ally.

Basic usability issues still plague users

Photo by jcomp – www.freepik.com

Call me an old romantic if you want, but I would have hoped that after all these years of interaction design some of the more basic usability issues wouldn’t crop up so often – and wouldn’t appear in places where designers really ought to know better. I’ve picked out a few examples here that I just happen to have encountered recently.

I’m asked from time to time who I think is doing really good design and usability and to be honest I struggle a bit. That’s because when things work well you don’t notice. But when there’s an issue, that’s what snags your attention.

Random stuff

Here’s a screen grab from a PR agency called Amendola Communications.

It’s a bit ironic, I think, that an agency that prides itself on good communications should be putting up paragraphs of centred text. Paragraphs should be left-aligned (or right-aligned, depending on the language’s reading direction) because when we’re reading it’s easier to locate the start of the next line. When text is centred we have to hunt around for the next line and it’s harder to read.

Also, the white arrow in the image above the text is moving up and down all the time. It’s distracting and interferes with reading. Best practice is not to use an animation like that or at least have a control to make it stop.


For my sins I am an Arsenal supporter. I recently renewed my season ticket. Whilst I’m a fan of the team I’m not a fan of the site and always expect usability problems. After I had entered and submitted my credit card details I received the following screen.

Here I’m being prompted to check that I’ve entered my card details correctly but there’s no way for me to do that. The card details are not displayed on-screen and there’s no link to them. I had to crash out of the process and start again. By the nature of the site it has a captive audience with no alternative online purchase method. If this were a commercial site with competitors they’d be losing money. It goes without saying that the relevant fields should be re-displayed to allow me to check them.


Here is a fairly typical presentation of a list of credit cards to choose from when paying. The last thing you want is for a customer to experience a problem when they are trying to give you money so every little detail counts.

Choose your payment card…

The problem with this list is that I feel like I have to hunt and peck to find my card. If I have a Mastercard Credit I’ll naturally pause at the first entry of Mastercard Debit, decide that’s not it, and scan the rest of the list. There are three ‘Visa’ entries separated out. Note also the inconsistent capitalisation of ‘Mastercard’ v ‘MasterCard’.

Similar items should be placed next to each other to allow the customer to check between them. ‘Visa’ should be ‘Visa credit’ unless there really are multiple options (which would be better split out explicitly if so). So the list should be more like this:

  • Mastercard debit
  • Mastercard credit
  • Visa debit
  • Visa Electron
  • Visa credit
  • Maestro UK
  • Maestro International
  • Solo

‘Debit’ and ‘credit’ are not proper nouns and so are not capitalised.

This approach makes it easier to chunk my task into 1) finding the right category of card 2) finding the right specific card within the category.
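The grouping step itself is trivial to automate. Here’s a minimal sketch in Python (the card names and brand order are illustrative, not taken from any real payment provider) using a stable sort so that options are grouped by brand while the original order within each brand is preserved:

```python
# Group payment-card options by brand so similar items sit together.
# The card names and brand order below are illustrative only.

BRAND_ORDER = ["Mastercard", "Visa", "Maestro", "Solo"]

def group_by_brand(options):
    """Stable sort: options are grouped by brand, and the original
    relative order within each brand is preserved."""
    def brand_rank(option):
        for i, brand in enumerate(BRAND_ORDER):
            if option.lower().startswith(brand.lower()):
                return i
        return len(BRAND_ORDER)  # unknown brands go to the end
    return sorted(options, key=brand_rank)

scattered = [
    "Mastercard debit", "Visa debit", "Visa Electron",
    "Maestro UK", "Mastercard credit", "Visa credit", "Solo",
]
for card in group_by_brand(scattered):
    print(card)
```

Because Python’s sort is stable, the key only decides which brand group an option belongs to – the order within each brand (e.g. ‘Visa debit’ before ‘Visa credit’) is whatever you listed.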

If you are going to be vague and specify ‘debit’ but not ‘credit’ as in

  • Visa
  • Visa debit

…then the Visa debit should be listed first. People with a Visa debit card will be looking for ‘Visa’ and will choose the first item – many won’t see the next entry, especially if it’s part of a longer list.

Volkswagen

Sigh. I think car sites in general have a way to go in terms of usability. The emphasis appears to be on making it all look nice, but you can’t find what you want. I went to the VW site to look at Polos. In trying to get to some detail I’m offered a choice between ‘Read more’ (button) and ‘Explore the features’ (text link). There is no guidance on what is behind either of these and I’ve not got a clue.

What’s the difference?

Most unhelpfully it turns out that ‘Read more’ is just a short list of awards which would have been better served with an ‘Awards’ link. Going to ‘Explore the features’ takes me to a page with a list of random articles but not a way to explore the features. I genuinely don’t understand why a site like this can be so user-hostile.

Giving up on that path I go to ‘Configure’ in the main navigation and get the following…

Choose your Polo to choose your Polo

In order to configure my Polo I have to choose which Polo I want first! How do I know? I can’t explore the features and I want a Polo with a certain size of engine. How do I know which of these models has the engine I want?

Persevering I choose a model and get to a configurator of a type that I’d expected to get to much sooner. Here’s part of it.

Confusion abounds

An exclamation mark apparently indicates some sort of conflict of choices. The ‘i’ provides more information – or it’s supposed to. There are many items in this list that don’t have an ‘i’ but I really would like to know what ‘Driving Profile Select’ means. And the ‘i’ against the Black Style Pack (why does it all have to be capitalised?) just gives me exactly the same list that’s on the page already.

At this point I give up. I simply can’t use this site for anything other than some surface information about the models available.

British Airways

I was looking at flights to Shannon in Ireland. When I start at ba.com I see this.

I don’t have an issue with the popup – but look at the page behind. It’s an old version of the homepage. When I click continue I get a completely different presentation.

It’s not a big deal (I think) but it can lead to a momentary pause because of the disconnect.

On searching for flights from London to Shannon I get the following popup.

The implication is that if you don’t want to go from or to terminal 2 then you can choose different flights – except you can’t. The only flights you can book here from London to Shannon are on Aer Lingus (now part of the same airline group as BA) and they fly from T2. There are no other choices. I suspect some customers may spend a while hunting around for other choices that they won’t find.

Typically when you’ve chosen your flights the first thing you want to see is the total price. Most travel searches don’t result in an immediate booking as people compare sites, airlines, routes, dates etc. A travel site needs to accommodate both the search and the booking functions. Here’s what I get as a quote page.

There is a replay of my choices at the top, which is good – I see it immediately and can check I haven’t made a mistake – but the total price is a long scroll down the page. I missed it altogether to start with as it didn’t stand out. I’m also asked near the top if I want to use some Avios (points) to cut the cost, but I don’t know what the cost is yet. It’s odd positioning and means I have to scroll the page up and down to compare my options.

Given that I was researching, I then wanted to see the cost of flights to Dublin. At the top right of the page there’s a link to ‘change flights’, but this only allows you to select different flights for your chosen route. There doesn’t appear to be a simple way of just changing the route. In fact the only way out of the whole process – if you happen to know it – is to click on the BA logo at the top left of the page, for which there are no cues. I suspect many people will be opening a new browser tab – or just giving up.

And finally, in confusion…

Qual and quant research are the underpinnings of effective digital design. Things have to look good, but a good looking site that customers can’t use to do what they want is merely a sinkhole for cash. The research must be done to ensure that what’s going to go live stands the best chance of success, and once it’s live the whole thing needs to be monitored to find out what can still be improved.

Having said that, there are some things that we know and don’t need to waste time and money researching. These could be generic things based on human psychology, like the fact that movement distracts, or something fundamental to an industry, like the fact that most travel searches are made to compare options and get prices.

So why do we still see so many of the same mistakes being made? There are a number of reasons, including but not limited to:

  • the need to persuade stakeholders
  • the designers working on a project are new and themselves need persuading
  • no good record or access to previous research
  • loss of expertise within organisations (= loss of organisational memory)
  • lack of time or money to refine designs

Jared Spool has written about genius design. It’s when the team becomes so expert in a field that they can quickly knock out effective design with less (not none) emphasis on research because they already know many of the answers. That takes a strong commitment to the longer term, building up that expertise, and embedding that process in the organisation. Here’s hoping.

It’s not the analysis, it’s asking the right question that counts.

There are many people who can ask questions and get answers by analysing a set of numbers or by interviewing the users of a product. There are fewer people who know what the right questions are that will deliver a deeper level of actionable insight.

Identify the right questions

I recently read a book called ‘Everybody lies’ by Seth Stephens-Davidowitz. Most of it is a fascinating insight into what lessons can be learned from a really intelligent analysis of Google searches. The contention is that whilst people modify their answers in interviews and on questionnaires to put themselves in a good light they tell the truth in their Google searches. Here’s a quote:-

Early in the primaries, Nate Silver famously claimed that there was virtually no chance that Trump would win. As the primaries progressed and it became increasingly clear that Trump had widespread support, Silver decided to look at the data to see if he could understand what was going on. How could Trump possibly be doing so well? Silver noticed that the areas where Trump performed best made for an odd map. Trump performed well in parts of the Northeast and industrial Midwest, as well as the South. He performed notably worse out West. Silver looked for variables to try to explain this map. Was it unemployment? Was it religion? Was it gun ownership? Was it rates of immigration? Was it opposition to Obama? Silver found that the single factor that best correlated with Donald Trump’s support in the Republican primaries was that measure I had discovered four years earlier. Areas that supported Trump in the largest numbers were those that made the most Google searches for “nigger.”

If someone is racist, likes extreme porn, wants to find out how to make a bomb etc they often won’t be keen to immediately disclose these things to the first researcher who asks them – but they will be honest in their Google search and that dataset can be mined. By looking at searches on ‘where to vote’ or ‘how to vote’ a more accurate prediction was made of voter turnout in specific geographies. In areas in the US where abortion has become harder to access there is a spike in searches for how to do your own abortion.

Here’s a Guardian article on the book

One thing that’s clear from the book is that the insights didn’t just leap out from the massive dataset. Asking the right questions was critical.

In business I’ve met many technically excellent data analysts who can churn out impressive charts at a great rate. Many of those charts have been completely useless in business terms. If you find someone who understands the business well enough to churn out meaningful, in-depth insights then you need to keep them.

Whilst A/B tests are often valuable they are by definition binary. Does the blue or the green button work best? What if a red one would be better? Multivariate testing lets you try multiple variables at the same time but then you need more time and volume to reach statistical significance. So knowing enough to ask the right question in the first place based on previous knowledge and experience can make a difference.

One of the difficulties I find with A/B tests is that everything except the variables being tested tends to be averaged out in the analysis. Maybe the test shows that the blue button works best, but hidden in the data is the fact that blue works best with frequent users and green works best with infrequent users. I recall a test done many years ago by a bank which found (once they looked) that the effective colour was different in the morning from the evening. Again, these things won’t just jump out from the data – someone has to think to ask the question.
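To make that concrete, here’s a small sketch with invented numbers showing how the aggregate result of an A/B test can point one way while a segment points the other:

```python
# Invented numbers to illustrate how aggregate A/B results can hide a
# segment-level split: blue wins with frequent users, green wins with
# infrequent users, yet the overall rates tell only part of the story.

results = {
    # segment: {variant: (conversions, visitors)}
    "frequent":   {"blue": (90, 1000), "green": (60, 1000)},
    "infrequent": {"blue": (20, 1000), "green": (35, 1000)},
}

def rate(variant, segments):
    """Conversion rate for a variant across the given segments."""
    conversions = sum(results[s][variant][0] for s in segments)
    visitors = sum(results[s][variant][1] for s in segments)
    return conversions / visitors

for segment in results:
    print(segment, "blue:", rate("blue", [segment]),
          "green:", rate("green", [segment]))

# The aggregate alone would suggest blue is simply 'better':
print("overall blue:", rate("blue", results))
print("overall green:", rate("green", results))
```

Unless someone thinks to slice the data by user segment, the overall figure is all that gets reported – which is exactly how the infrequent users’ preference for green disappears.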

This doesn’t just apply to data. I’ve written a previous blog post on common errors in survey design. I also recall a time when we were interviewing customers in Germany on the design of the flight selling system on ba.com. One interviewee was indicating that he thought design A was better than design B for a particular page. My colleagues seemed to take this at face value but I wasn’t convinced – something wasn’t right. In my view it was obvious that B was better and I thought we just weren’t asking the right questions.

At the end of the interview I went in to follow up. Did he prefer A or B? A was the answer. Which was better? A was the answer. Which one should we implement? A was the answer. Which one was easier to use? B! It turned out that B was easier to use but he liked the look of A. So the resulting action – which we nearly missed – was to maintain the usability of B and combine it with the visual appeal of A. It may seem obvious in retrospect, but it was a good lesson in being clear about the difference between someone ‘liking’ a design (which really doesn’t mean much), preferring the colours, or finding it easier to use. What question is it that you actually want an answer to?

There is currently a debate raging on whether screen time – and how much of it – has adverse effects on children. More data is being brought to bear over the many opinions that are freely available, yet there is still no consensus. The Oxford Internet Institute has recently released a study that found that screen time had little impact on ‘teen well-being’. And the World Health Organisation came out with a report that children under two should have no sedentary screen time at all. As The Verge points out, though, ‘the guidelines are less about the risks of screen time itself, and more about the advantages of spending time doing pretty much anything else’.

In a recent edition of the BBC’s Tech Tent podcast, the most intelligent comment I heard from a guest in relation to this issue was: ‘I’m not sure we’ve asked the right questions yet.’

Minimise errors in a selectable list by placing similar choices together

When paying by card for an online purchase some sites recognise the type of card from the number. Others ask you what type of card you are using. Here’s a typical dropdown menu asking the customer to specify the type of card.

Dropdown list of credit/debit cards

There’s no apparent order to the list of items – it seems that someone doesn’t really think it matters. However, for those who have a Mastercard credit card there are two issues.

Firstly, since users typically have the attention span of a gnat, some people are likely to register just the word ‘Mastercard’ on the first line and select it, resulting in an error on submission.

Secondly, whilst the debit cards have the word ‘debit’, the credit cards do not. This potentially leads to some ambiguity as to whether ‘Mastercard’ on its own refers to the credit card. It’s left to the customer to make that assumption. Note also that for the debit card it’s spelled ‘Mastercard’ and for the alternative it’s ‘MasterCard’. That’s just sloppy and will again cause some people to wonder if the difference is significant.

In such a list the debit and credit version should be consecutive e.g.

  • Mastercard debit
  • Mastercard credit
  • Visa debit
  • Visa credit
  • Visa electron
  • Solo
  • etc….

Doing it this way makes it quite clear what each card is and also maximises the chance that the customer will see that there is more than one option for some card types.

This principle applies to any list that users need to choose from and is relevant to surveys. I once filled in a survey about how I had paid for my car. One of the early options was a complex description of a contract which looked like the right choice. I nearly selected it and many would have, but I scanned through the rest of the list and found another very similar description further down that was actually the correct one. Had the two options been consecutive in the list there would be more chance that users would spot that the first likely option they come across is not necessarily the right one. It would also have helped if the two options had been worded to highlight the difference.

The bottom line is that when presenting users with choices in a list

  • make the choices explicit
  • word each option to highlight differences
  • put options that could be confused with each other close together

The pros and cons of structured job interviews and competency-based questions

The interview

About ‘competency-based’ and ‘structure’

I’ve recruited a lot of people – mostly, but not exclusively, UX/UI designers and researchers. Over time, unsurprisingly, I’ve evolved my approach, and whilst I didn’t start with them I did adopt competency-based questions as the mainstay of my interviews.

Let’s make sure we’re all on the same page – when I talk about ‘competency-based’ I mean questions along the lines of ‘Can you think of a time when…’, asking when the candidate dealt with an uncooperative colleague, did their best work, and so on.

The rationale behind such questions is that evidence of past behaviour is the best predictor of future behaviour, and that asking people about what they have actually done and getting them to be specific about it is better than asking what someone hypothetically might do in a given situation. It’s a good rationale.

Since I’ve been job hunting and have now been on the receiving end of such questions I’ve gained a more rounded view of the pros and cons of this approach. What it boils down to is this – interviewers can be more focused on following interview protocol than getting the best information from the candidate. It’s also tied in with having a structured interview. HR advisers quite rightly point out that if you want to be able to properly compare candidates then you need to be consistent in your approach and the questions you ask. It’s a scientific experiment.

When I first recruited people I had a chat and decided on not very clear criteria whether I thought they’d be any good. I actually got good feedback from my boss on the quality of the people that I took on so it wasn’t a disaster – and I’m still in touch with most of those people.

When I was subsequently presented with a structured discussion guide where I had to score the candidate on each section I was sceptical at first. However I did very quickly find it to be useful, especially where I was interviewing a number of people. As an aside I’ll point out that I would always interview with a colleague. This co-interviewer would usually be one of my team – it gave them the experience and development which they enjoyed and helped to validate my own impressions of the candidate. Having done the scoring on each candidate I was surprised how much it helped to clarify our thoughts.

Evolution

The thing is, it’s not possible to control for all the variables of an interview in the way that may be possible in a scientific experiment. People react differently to questions and to context. I started off asking all the questions in the guide as they were written, and in that order. I quickly realised that this caused some problems. The questions that were confusing or irrelevant could be re-written or dropped, which was done. But sometimes in answering a question a candidate would end up covering some or all of a later question without knowing it, so when it came to that subsequent question it wouldn’t make much sense just to ask it straight. So we might say something like ‘Apart from the thing you just told us about, what else did you do about x?’

The other thing was that people get nervous in interviews. When they do they get tunnel vision and their thinking closes down. We did what we could to try to make them feel comfortable but that only works so far. So sometimes we’d ask ‘Tell us about a time when…’ and the candidate would answer a completely different question, or rabbit on for ages getting mired in detail. The strict version of the protocol says let them talk, say thanks and move on to the next question. That’s what I did at first but very quickly realised that it wasn’t helping me or the candidate.

I started to give some nudges and sometimes just stopped the candidate outright and tried to re-focus them. I was interested in an answer to the question, not how well they coped with an artificial interview. I would prompt the candidate to ‘tell me a bit more about that’, or ‘that’s not quite what I’m getting at, is there a different example you can think of’.

On the receiving end

It’s been quite instructive being on the receiving end of this process. There’s a risk that the application process (see my post ‘Internal and recruitment applications need as much UX as ecommerce’) and the interview style become major filters in their own right, rather than the candidate’s ability to do the job.

I’ve found some of the competency-based questions quite hard. To suddenly come up with an example of experience from a long work history that meets my (and the interviewer’s) understanding of what is actually being asked has been challenging at times. On one occasion I couldn’t really understand what the difference was between the questions I was being asked, and there was no guidance. So it’s not surprising that I waffled and the interviewers didn’t really get to hear what I was capable of.

I have also been asked to ‘Think of a time when…’, when the straight answer is ‘I can’t, because it never happened’. What I do is try to be honest about it but think of an analogous situation that might still score me a point or two. When I was interviewing people and that happened I would fall back on ‘Ok, I understand that’s a situation you’ve never been in, so what do you think you would do if it occurred?’ At least then I find out whether the candidate understood the context and the appropriate actions, rather than just giving him or her a low score and moving on.

Sometimes I’ll think of the ‘right’ answer to a question just as I walk out of the interview room. You could argue that the ability to think on your feet is an essential attribute of the job and it would be a fair point, but the context and nature of what you’re being asked about is different.

Introverts are typically reflective. If you’re having a meeting at work you’ll get the best input from the introverts if you let them know what you want from them in advance. Otherwise they’ll let you know after (or, often, not) that they’ve thought of something they should have said in the meeting. Interviews are no different. The format of ‘give me an answer now to an important question’ discriminates against introverts. I’m an introvert.

Top tips for interviewers

So here’s where I’ve got to in my thinking around job interviews.

  • Firstly and most important, as an interviewer never lose sight of why you’re there. You are trying to find the best candidate for the job, not the person who is best at interviews.
  • Use an interview guide as a guide not a script. The topics covered are what you need to find out about, and your job is to ask questions, prompt and direct the candidate so that you do find out. Don’t just stand back while someone digs their own grave.
  • Consider letting the candidate know in advance what the questions are – or at least the subject areas that are going to be covered.
  • Overall be flexible, subtle and nuanced in your questions. Try to understand the person in front of you rather than blindly following process.