Two dimensions of customer research methods for the uninitiated

Are you confused?

A lot of words get bandied about when talking of “finding out” what customers do and want. Generally this “finding out” comes under the heading of “insights”, and there are a multitude of methods for getting insights. I suspect that many people aren’t at all clear about these different methods even if they use the outputs in their jobs. Maybe their work brings them into contact with analysts and researchers who talk fast and don’t explain much, but expect their colleagues to immediately latch on and understand why and what they are doing, and what the results mean.

That’s a bit of a generalisation I know, but I have found that even people who work quite closely with insight providers can still find it difficult to understand why one method is picked over another to answer a given question.

If you share some of that confusion, then I’ll try to help.

It’s important to be clear from the outset that there’s no single insight method that’s better than the rest. It depends on what you want to find out, and sometimes it’ll depend as well on how much money or time you have. Often, the best insights are gained from a combination of methods.

Qualitative and quantitative

One of the key distinctions in types of insight is the split between qualitative (qual) and quantitative (quant). Quant is to do with numbers and qual is broadly about what people say. Quant methods analyse numbers – typically using a range of statistical techniques – and may need sufficiently large sample sizes if measures of statistical significance are required. Qual methods most commonly gather words but can also use other media such as photos, videos or sound. Qual research will often use small sample sizes, but if sufficient data is collected then it’s possible to generate numbers to which statistical methods can also be applied (and yes, I know there are statistical methods for small data sets too).

Site analytics tools such as Google Analytics or Adobe Analytics are quantitative methods. They deal with the analysis of numbers and statements of fact about behaviours – for example how many people who landed on the homepage came from paid search and then went on to make a purchase (or not).

Interview methods such as usability trials and focus groups typically generate qualitative insights as they result in a description of what people think, what issues they had, what they say they like, how they react to concepts etc. If enough interviews are done it’s possible to create some statistically significant analyses – such as whether a particular age group liked a picture on a page more than another age group.
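As an illustrative sketch of that kind of analysis (the counts below are invented for the example), a simple two-proportion z-test shows how counts gathered across enough interviews can be checked for statistical significance:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: did group A respond positively
    more often than group B? Returns the z statistic."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 18 of 25 younger respondents liked the picture
# versus 9 of 25 older respondents.
z = two_proportion_z(18, 25, 9, 25)
print(round(z, 2))  # |z| > 1.96 suggests significance at the 5% level
```

With these made-up numbers the difference would clear the conventional 5% threshold – which is exactly why the sample size matters before claiming significance from qual-derived counts.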

Self-reported and observed

Another key distinction is between self-reported and observed behaviours. Web analytics, for example, are quantitative, and they also observe behaviours and technology use. Analytics allow us to virtually observe what people have done and the technology they used on our site – we can (virtually) see the behaviour from the digital footprints that are left. By contrast, a survey asking respondents to use a rating scale is also a quantitative method, but it is self-reported. If someone rates themselves as a 10 in likelihood to recommend your service that number can be quantitatively analysed along with all the other responses, but it doesn’t mean that we have observed that person making a recommendation. They may or may not actually recommend you when it comes to it.

Observed behaviours are typically more accurate in that they have happened in reality. X number of people did in fact land on the homepage from paid search and went on to convert. When we do usability testing with a prototype there is a mix of methods. We can observe how respondents actually use the interface, and we can see whether they complete the task or not. At the same time we ask them to self-report on the reasons why they are doing what they do, and what other features would be useful. It’s useful to hear what they say although we have to bear in mind that there are a number of factors that cause people to be inaccurate when self-reporting.

This inaccuracy is why there’s sometimes a big difference between an opinion poll and how people actually vote. When reporting their own behaviours, thoughts and feelings (in the past, present and future) it’s naturally human to want to please others, and some may offer a reply that they think the interviewer will like. They may also not reply truthfully if they feel it’s a touchy subject, such as when declaring how much alcohol they drink or whether they will vote for a controversial candidate.

And so we see that reported methods are only as valid as the truthfulness and accuracy of the report. We can see on video surveillance that someone did in fact commit a crime. When asked in court why they did it the perpetrator may lie, but even if they intend to tell the truth their memory may be inaccurate, or they may not understand their own motivations. Humans aren’t always good at understanding and explaining their own behaviour and we’re all good at post-rationalising.

Even with all these caveats reported behaviour does have the great benefit of being able to explain WHY someone behaved or plans to behave in a certain way. Often, it’s stated that quant tells you WHAT happened and qual gives you the WHY. That’s because the quant is frequently an observed method and the qual is frequently a reported method. But it’s more accurate to say that observed methods tell you what happened and reported methods tell you why.

Categorising insight methods

Taking these two dimensions we can create a 2×2 matrix (that’s so beloved in business circles) and we can position different methods into the quadrants.
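As a sketch of that matrix (the placements are my own, and a method could shift quadrant depending on exactly how it’s run), each method can simply be tagged on both dimensions:

```python
# Each method tagged as (data type, data source) -- illustrative placements
methods = {
    "site analytics":      ("quantitative", "observed"),
    "rating-scale survey": ("quantitative", "self-reported"),
    "opinion poll":        ("quantitative", "self-reported"),
    "usability interview": ("qualitative",  "self-reported"),
    "focus group":         ("qualitative",  "self-reported"),
    "session replay":      ("qualitative",  "observed"),
}

def quadrant(data_type, source):
    """List the methods that fall in one cell of the 2x2 matrix."""
    return sorted(m for m, tags in methods.items()
                  if tags == (data_type, source))

print(quadrant("quantitative", "observed"))  # → ['site analytics']
```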


My intent in writing this short article is, as I stated up front, to help the confused get a better understanding of the relative merits and whys and wherefores of different insight methods. The matrix is just an aid to that end – a way to give some structure to thinking about it. Depending on exact context and methodology, some of the methods might shuffle around a little, and this isn’t a nailed-down definitive model. Modern neurological methods of insight can blur the lines.

That’s just part of the story

What I’ve described here is a way to think about what different insight methods give you. My intent is that it’s useful for the stakeholder who is working with research professionals and just wants to make sense of what all these things are good for.

The next part of the story, which I’ll pick up in my next article, is the question of how you actually decide which method to use in a given context. That’s where we’ll overlay the constraints of budget, time, and stage in the project lifecycle to guide us to the best method for your circumstances.

If you want help with research or optimising Digital Experience for your customers you can find out more about Daydot, the company I work for at

Four actions to improve digital CX and be ready for the new normal

What are brands thinking?

It’s no secret that the pandemic has significantly increased online transactions whilst also having a huge negative impact on bricks-and-mortar stores. The BBC here reports on ‘Tesco profits surge as online orders double’ whilst Aldi, which hasn’t had online ordering so far, is now being driven at least to trial it.

It’s not only supermarkets that have been affected, and with the growth in numbers come people who are less used to online commerce, or who have different needs and perspectives – which is maybe why they weren’t already online.

In their Digital Trends Survey, eConsultancy asked brands about what they were finding in relation to customer journeys. You can see from the chart below that some felt they were in a better position than others to cope.

Whilst organisations are dealing with the changing way that customers want to interact with them – and who those customers are – they are also having to manage and cope internally with their own disrupted workforce. We’ve all seen the (sadly ongoing) news around temporary and permanent layoffs, as well as the huge shift to working from home. Again, some companies will have had better business continuity plans than others and the impact on customers will have been mixed.

It’s pertinent to ask what organisations are focusing on in all this turmoil. Of course, for some it’s simply a question of doing what it takes to survive. But for those with cash reserves or enough business to see them through, are they looking simply to conserve resources until it all gets better, or are they still making a priority of customer service?

There is plenty of research that shows that companies with a customer-focus outperform the stock market – including in bad times. There is also some good advice on how to win customers in an economic downturn. When people are feeling down and their livelihoods are threatened the last thing they need is poor service. When things do start to recover those people who interacted with brands and had good or poor experiences at emotionally stressful times will remember. What brands do now will impact customer attitudes towards them later.

What customer focus means

One of the buzz phrases for some years now has been ‘digital transformation’ which is about large-scale changes in infrastructure and processes to get to a better operating position with digital services. This is typically (and not wrongly) what the higher echelons in companies think of when they ramp up digital. However, the best also understand the value of a good hands-on experience for customers, underpinned by research and implemented by designers of various stripes.

You can have the greatest CRM or personalisation system in the world, with the best bandwidth, security, and redundancy, but it’s all for naught if customers just don’t want to – or can’t – use the service.

I’ll give you some examples.

I was recently looking for some blue-light clip-on filters for my glasses. I want to wear them in the evening to help my brain prepare for sleep before I go to bed. I did what so many of us do these days and started on Amazon and found the usual array of similar products for varying prices, with little supporting information as to the difference between them and why I shouldn’t just buy the cheapest. However, the most expensive ones (for £27 from Klim) included information on what wavelengths of light were blocked, and how much of it was blocked.

In addition there was information about the differences between models of Klim clip-on glasses. Psychologically it’s been shown that people don’t actually have to understand this sort of thing (although I think I do) – just having some scientific-looking information and some product info helps potential buyers to trust the seller more. There’s an emotional component as well as just providing straight facts.

In the Amazon product listing there was a cheaper pair from Cyxus. I’d never heard of either Klim or Cyxus so I checked the company websites. They both looked reputable (again the emotional component) but there was still no technical information on the Cyxus site including what effect the different colours of glasses would have.

I messaged the company through their site and never received a reply.

Needless to say I bought the Klim glasses.

Here’s a different sort of issue.

In April of 2018 I wrote a blog post about a disconnect between Lloyds Bank databases that resulted in a significant customer problem. When lockdown hit I received messages from people who had obviously Googled the problem and found my post. They were asking for help and solutions that they couldn’t get from Lloyds for a problem that had been known for years. Banking problems can be immensely stressful, and even more so in troubled times. How do you think those people are going to think about Lloyds in the future?

And yet another example.

A while ago I was trying to log in to Simply Health – a health insurance provider. I got this message.

The error message told me that I couldn’t log in because of a technical glitch at their end and to try again later. In fact the reason for the error was that they’d changed from using usernames to email addresses for login. They had warned me in advance by email, but I’d forgotten, and they’d failed to update their error message. This is exactly the sort of thing that, as I know from surveys and feedback, causes people immense frustration as they follow the instruction and try again and again over time.

The inability to contact companies for basic information or function is another source of complaint.

During lockdown I ordered something from Currys’ website. I then wanted to cancel it. It isn’t possible to cancel an order online, and because Currys had limited phone capacity it was a nightmare to get through. Around the same time I ordered a different product from Very and then needed to cancel it. The process on the Very website was simple and easy, and the refund arrived quickly.

These experiences form emotional associations in our minds and give us little nudges when we’re thinking about where to buy the next product from, or who to recommend to our friends and neighbours.

4 things that brands can do to help themselves and their customers

We could acknowledge that brands may not have been set up for the pandemic and the record traffic levels it brought to their websites, but too much of what I see has nothing to do with that. It’s about a fundamental attitude and approach to improving the customer experience and, in these cases, doing some Digital Experience Optimisation (DXO). Brands need to learn how delving into the world of customer research (attitudes and behaviour) and experimentation can help them deliver a better digital experience for customers – one that will reap business rewards in the form of long-term customer loyalty and competitive advantage.

To get started on this journey, here are four areas brands should look to prioritise as we move into 2021.

  1. Do everything possible to allow customers to self-serve. Divert some of any remaining development funds from new sales features to self-service enablement. Within organisations it’s usually the sales functionality that’s considered the sexy bit on a site. That doesn’t necessarily mean that it’s the best way of spending a pound though. Customer experience isn’t always included in the ROI equations of business cases, but it should be if you’re interested in keeping customers and thinking of lifetime value. If you can provide basic functions online then the overworked customer service staff can focus on the people who really do need to talk to them.
  2. Ensure information on the website is up to date and comprehensive. Which shops are actually open? What times? Ensure that FAQs do really reflect frequently asked questions, rather than being just added sales blurb that someone in an office thought of. Analyse calls and emails and social media to find out what customers want to know, and put it on the site.
  3. Not all issues are necessarily going to be apparent through calls and emails. Through those you’ll find out the issues that people have when they are sufficiently motivated to make contact, but there are likely to be other problems that just cause people to give up and go away, and you’ll never know. For these you need to do some proactive research. Just running a simple site survey can yield a goldmine of information, and methods such as session replay and interviews conducted by experts can help paint a vivid picture of customer pain for internal stakeholders who hold the purse strings.
  4. If you don’t have time or expertise for this work, get some help. This applies to organisations of all sizes, and not only commercial organisations. If you want to provide a successful service to people the principles in this article apply. Even large companies with expert internal teams might find they are maxed out on essential work – where they don’t consider customer experience to be essential. It doesn’t have to be expensive. You might just need some consultancy on identifying opportunities, or could benefit from input from a researcher on structuring a survey so you don’t end up collecting junk data.

In conclusion

In tough times people can become overwhelmed with emotion, and anyone who contributes negatively to those emotions shouldn’t expect to be welcomed back when things get back to normal – whatever ‘normal’ looks like at the end of the tunnel.

I am (I’ve said it so many times) constantly amazed at the blind spots that organisations have in understanding the impact of the customer experience online, and what to do about it. Organisations need to embed this into their culture now rather than waiting for a post-covid world when it may be too late.

Start with the intent to do better, find out what problems your customers have, and work out (with external help if needed) how you can optimise your digital customer experience to solve them. I work for an agency called Daydot. If you’d like some help to optimise your digital customer experience, get in touch.

How Moleskine won my loyalty from Neo

Neo M1 smartpen

It absolutely amazes me that there are still companies who don’t understand that happy customers are better for a business in the long term than a short-term sale.

About ten years ago at a conference I bought an innovative product called Papershow. It was a Bluetooth-enabled smartpen that allowed you to write on special paper and have the result appear on a whiteboard on your PC, which could then be shared remotely in a presentation. It was a great dynamic addition to a PowerPoint presentation, replicating the experience of having a whiteboard or flipchart in a meeting room with local attendees. The advantage of this over a drawing tablet was that you could look at the paper while drawing. With a tablet you have to write while looking elsewhere – at your computer screen – which can be tricky. Anyway, I liked it.

I last used the pen a couple of years ago and had planned to use it again for a remote presentation I was due to give on a Tuesday. On the Sunday before I got the pen out to check it still worked – and it didn’t. The software was so old it would no longer work with Windows 10, and the company appeared to no longer exist, so no update was available. I should have checked sooner.

After some research I decided that the best (least expensive) direct replacement would be an M1 Neo smartpen. I wasn’t sure at first because if you look at the website it doesn’t actually explain what the product does. Here’s the link

I found that I could order the M1 from Amazon for delivery on Monday. The listing I purchased it from didn’t actually say that it was the M1 but on careful reading and looking at the pictures it was clear that it was. Many product listings on Amazon are appalling, but that’s another matter.

The problem was that the special paper needed to make the pen work wasn’t bundled with the pen and I could only get it delivered on Tuesday, which would be too late. The neopen website references a UK reseller who only has an email contact. I emailed asking if there was anywhere in London where I could go to physically buy the paper on Tuesday. I received a response on Wednesday saying that they only had an online shop.

Some digging in obscure online corners on the Sunday yielded the information that the Moleskine smart pen was made by Neo, and that Moleskine sold the same ‘n coded’ paper. There was a Moleskine shop near to where I was giving the presentation, so I phoned them and a helpful lady told me that she thought the combination would work, and that if I came in on Tuesday I could try it out before buying. I ordered the pen, taking the chance that I would be able to get paper for it before the presentation.

On Monday morning I also emailed Neo themselves, again asking how I could quickly get the paper in London, and also asking which Moleskine notebook would work. The reply that I received on Tuesday morning came as an unwelcome surprise. They said that the Moleskine paper was not compatible with the M1, and that I should purchase the paper from their online shop (not the UK reseller), which their website says will take 3–5 days to arrive.

At this point I was out of options. The pen had arrived and synced with the computer, and I’d downloaded the app. I headed into London, destination Moleskine. The lady in the shop remembered me from my call and couldn’t have been more helpful in setting up my laptop to try out the paper. It worked flawlessly. The Neo app even recognised that it was a ‘smart cahier’ notebook, which I duly bought. The irony was that the presentation went OK but I didn’t need to use the pen. Oh well.

Being somewhat piqued that the Neo helpdesk had told me that the M1 wouldn’t work with the Moleskine paper I emailed them back to let them know that it did work, and somewhat tongue in cheek asked for a contact who I could tell that the helpdesk people hadn’t received proper training.

I was then even more surprised and angered to get a response that said the companies have totally different products (clearly not true) and that, as an error could occur using Moleskine paper, they recommended that I not use it, even though I’d told them it worked fine. Since I have something of a life to live I didn’t bother replying but thought I’d vent some of my frustration in this post.

It would have made most sense in their initial response for Neo to acknowledge my urgent need, and to provide advice on the Moleskine paper as the only immediate option. They would have been within their rights to point out that they didn’t make it (or do they?) and so couldn’t guarantee it, and they would at least have had a satisfied and loyal customer.

The customer service I got from neopen was in stark contrast to that from Moleskine. Guess who’s going to get my business in future.

How to convert sceptical UX stakeholders to allies


I have often been asked how I go about persuading sceptical stakeholders (which I’ll abbreviate to SSH) that UX practices including research are the right thing to do – and that they are effective. The alternative is usually touted as either ‘just draw what I’m telling you’ or ‘if you’re so good at this why do you have to take all this time and money to figure out what to do’.

I’ll say right up front that the single most effective method that I’ve found is to get the SSH to attend some user research in person. The SSH will have their own preconceptions of what will and won’t be effective and will often believe they intimately understand how customers think. You can argue and present an alternative view to them based on experience, previous research, numbers, whatever – but ultimately it just comes down to your opinion (sure, with some backup) against theirs. There’s no emotional or visceral connection for them.

Once the SSH watches a real customer struggling with an ‘easy and obvious’ interface, or articulating a completely different rationale and way of thinking about what they are doing from the SSH’s assumptions, then that emotional connection is made. Either they choose to accept what they’ve seen and heard or they choose to ignore it. If the latter, it’s a different ballgame, but any reasonable person will concede that they have learned something useful and new. I would also add that if, for example, you are doing 1-1 depth interviews for usability, then the SSH needs to attend at least 3 sessions and preferably more. They need to see that the issues arising are not the whim of a single atypical customer. If they see that a particular issue is raised by even two or three people then the message starts to sink in.

Even so, there can still be some peripheral objections about the methodology or the way the questions were asked, or that the questions didn’t get at the heart of the matter. So there are some things to do to ensure that the viewing experience has the greatest impact. These can be summed up in ‘involve the SSH all the way through’.

Firstly, make sure you understand not only the business objectives of that SSH but also their personal drivers. I’ll take it as granted that you’re balancing business objectives with customer needs in a design, but if you want to take a SSH on the journey with you then you need to know if they are dealing with a similarly sceptical boss they also need to convince, or if they are new to their role and feel they need to prove themselves quickly – or whatever. This understanding will inform your conversations and the supporting material you provide them with.

Even if the SSH has some ideas about design it may be that showing them some options or introducing technical constraints will sow some seeds of doubt about their own invincibility. I’ve found a workshop with a limited number of people from commercial, engineering, design (and whoever else is needed – legal, PR etc) can be effective. The idea again is that it’s not just you arguing the toss but a session of domain experts focusing on the issue at hand, working through constraints, enablers and options. At the end of the session there may be outstanding actions for people to go away and find out about – there may be aspects of business process, technical possibility or law to be clarified before significant further steps can be taken. In a small organisation this session may just be a few people round a table – in a large organisation it could be a bigger meeting.

It’s important throughout all this to present a humble face. Whilst you may be convinced that a given approach is the right one you need to show that you are listening and considering alternatives – just as you are asking others to do.

When it comes to planning some research then the SSH has to be included in agreeing the objectives, method and conduct of the research. You don’t want them to have that wiggle room afterwards. If the SSH has agreed to all these things and been given ample opportunity to voice any objections or issues then they will be more committed to the process. This doesn’t mean that you have to do everything they ask. You still need to be the expert running the show – the person who knows the right way to do things. So you need to find a way to incorporate their input in an appropriate manner. Sometimes it’s necessary to include a design option that you are convinced won’t work just so that the SSH can see it for themselves and to show that you’re not trying to ‘rig’ the outcomes.

If you look at resources on stakeholder management you’ll find plenty of other techniques that you can use alongside what I’ve described here – and it’s a good idea to do so. Nevertheless, if you make sure you are engaging in constructive dialogue, showing that you are listening and exploring options, and involving the SSH all the way through in the planning and execution of the research, then you’ll find it takes you a long way towards turning that sceptical person into an engaged ally.

Basic usability issues still plague users

Photo by jcomp

Call me an old romantic if you want but I would have hoped after all this time of interaction design that some of the more basic usability issues wouldn’t crop up so often – and wouldn’t appear in places where designers really ought to know better. I’ve picked on a few examples here that I just happen to have encountered recently.

I’m asked from time to time who I think is doing really good design and usability and to be honest I struggle a bit. That’s because when things work well you don’t notice. But when there’s an issue, that’s what snags your attention.

Random stuff

Here’s a screen grab from a PR agency called Amendola Communications.

It’s a bit ironic, I think, that an agency that prides itself on good communications should be putting up paragraphs of centred text. Paragraphs should be left-aligned (or right-aligned, depending on the language) because when we’re reading it’s easier to locate the start of the next line. When text is centred we have to hunt around for the next line and it’s harder to read.

Also, the white arrow in the image above the text is moving up and down all the time. It’s distracting and interferes with reading. Best practice is not to use an animation like that or at least have a control to make it stop.

For my sins I am an Arsenal supporter. I recently renewed my season ticket. Whilst I’m a fan of the team I’m not a fan of the site and always expect usability problems. After I had entered and submitted my credit card details I received the following screen.

Here I’m being prompted to check that I’ve entered my card details correctly but there’s no way for me to do that. The card details are not displayed on-screen and there’s no link to them. I had to crash out of the process and start again. By the nature of the site it has a captive audience with no alternative online purchase method. If this were a commercial site with competitors they’d be losing money. It goes without saying that the relevant fields should be re-displayed to allow me to check them.

Here is a fairly typical presentation of a list of credit cards to choose from when paying. The last thing you want is for a customer to experience a problem when they are trying to give you money so every little detail counts.

Choose your payment card…

The problem with this list is that I feel like I have to hunt and peck to find my card. If I have a Mastercard Credit I’ll naturally pause at the first entry of Mastercard Debit, decide that’s not it, and scan the rest of the list. There are three ‘Visa’ entries separated out. Note also the inconsistent capitalisation of ‘Mastercard’ v ‘MasterCard’.

Similar items should be placed next to each other to allow the customer to check between them. ‘Visa’ should be ‘Visa credit’ unless there really are multiple options (and if so, they would be better split out explicitly). So the list should be more like this:

  • Mastercard debit
  • Mastercard credit
  • Visa debit
  • Visa Electron
  • Visa credit
  • Maestro UK
  • Maestro International
  • Solo

‘Debit’ and ‘credit’ are not proper nouns and so are not capitalised.

This approach makes it easier to chunk my task into 1) finding the right category of card 2) finding the right specific card within the category.
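As a hypothetical sketch of that grouping (the brand and type orderings below are my own assumptions), the raw list can be sorted with a two-part key – brand first, then card type – so similar items always land next to each other:

```python
# Fixed orderings: brands first, then variants within a brand
# (debit before credit, per the suggested list above).
BRAND_ORDER = ["Mastercard", "Visa", "Maestro", "Solo"]
TYPE_ORDER = ["debit", "Electron", "credit", "UK", "International", ""]

def card_sort_key(card):
    """Split 'Mastercard debit' into brand + variant and rank both."""
    brand, _, variant = card.partition(" ")
    return (BRAND_ORDER.index(brand), TYPE_ORDER.index(variant))

raw = ["Mastercard debit", "Visa Electron", "Solo", "Visa credit",
       "Maestro International", "Mastercard credit", "Visa debit",
       "Maestro UK"]
print(sorted(raw, key=card_sort_key))
```

Running this reproduces the grouped ordering suggested above, whatever order the payment provider happens to supply the options in.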

If you are going to be vague and specify ‘debit’ but not ‘credit’ as in

  • Visa
  • Visa debit

…then the Visa debit should be listed first. People with a Visa debit card will be looking for ‘Visa’ and choose the first item – many won’t see the next entry especially if it’s part of a longer list.

Volkswagen

Sigh. I think car sites in general have a way to go in terms of usability. The emphasis appears to be on making it all look nice but you can’t find what you want. I went to the VW site to look at Polos. In trying to get to some detail I’m offered a choice between ‘Read more’ (button) or ‘Explore the features’ (text link). There is no guidance on what is behind either of these and I’ve not got a clue.

What’s the difference?

Most unhelpfully it turns out that ‘Read more’ is just a short list of awards which would have been better served with an ‘Awards’ link. Going to ‘Explore the features’ takes me to a page with a list of random articles but not a way to explore the features. I genuinely don’t understand why a site like this can be so user-hostile.

Giving up on that path I go to ‘Configure’ in the main navigation and get the following…

Choose your Polo to choose your Polo

In order to configure my Polo I have to choose which Polo I want first! How do I know? I can’t explore the features and I want a Polo with a certain size of engine. How do I know which of these models has the engine I want?

Persevering I choose a model and get to a configurator of a type that I’d expected to get to much sooner. Here’s part of it.

Confusion abounds

An exclamation mark apparently indicates some sort of conflict between choices. The ‘i’ provides more information – or it’s supposed to. There are many items in this list that don’t have an ‘i’, and I really would like to know what ‘Driving Profile Select’ means. And the ‘i’ against the Black Style Pack (why does it all have to be capitalised?) just gives me exactly the same list that’s already on the page.

At this point I give up. I simply can’t use this site for anything other than some surface information about the models available.

British Airways

I was looking at flights to Shannon in Ireland. When I start on the site I see this.

I don’t have an issue with the popup – but look at the page behind it. It’s an old version of the homepage. When I click ‘Continue’ I get a completely different presentation.

It’s not a big deal (I think) but it can lead to a momentary pause because of the disconnect.

On searching for flights from London to Shannon I get the following popup.

The implication is that if you don’t want to go from or to Terminal 2 then you can choose different flights – except you can’t. The only flights you can book here from London to Shannon are on Aer Lingus (now part of the same airline group as BA) and they fly from T2. There are no other choices. I suspect some customers may spend a while hunting for other options that they won’t find.

Typically, when you’ve chosen your flights the first thing you want to see is the total price. Most travel searches don’t result in an immediate booking, as people compare sites, airlines, routes, dates etc. A travel site needs to accommodate both the search and the booking functions. Here’s what I get as a quote page.

There is a replay of my choices at the top, which is good as I can immediately check I haven’t made a mistake, but the total price is a long scroll down the page. I missed it altogether at first as it didn’t stand out. I’m also asked near the top if I want to use some Avios (points) to cut the cost – but I don’t yet know what the cost is. It’s odd positioning, and it means I have to scroll up and down the page to compare my options.

Given that I was researching, I then wanted to see the cost of flights to Dublin. At the top right of the page there’s a link to ‘change flights’, but this only allows you to select different flights on your chosen route. There doesn’t appear to be a simple way of just changing the route. In fact, the only way to get out of the whole process – if you happen to know it – is to click the BA logo at the top left of the page, and there are no cues that this works. I suspect many people will be opening a new browser tab, or just giving up.

And finally, in confusion…

Qual and quant research are the underpinnings of effective digital design. Things have to look good, but a good-looking site that customers can’t use to do what they want is merely a sinkhole for cash. The research must be done to ensure that what goes live stands the best chance of success, and once it’s live the whole thing needs to be monitored to find out what can still be improved.

Having said that, there are some things we already know and don’t need to waste time and money researching. These could be generic findings from human psychology – such as the fact that movement distracts – or something fundamental to an industry, such as the fact that most travel searches are made to compare options and get prices.

So why do we still see so many of the same mistakes being made? There are a number of reasons, including but not limited to:

  • the need to persuade stakeholders
  • the designers working on a project being new and themselves needing persuading
  • no good record of, or access to, previous research
  • loss of expertise within organisations (= loss of organisational memory)
  • lack of time or money to refine designs

Jared Spool has written about genius design. It’s when the team becomes so expert in a field that they can quickly knock out effective design with less (not none) emphasis on research because they already know many of the answers. That takes a strong commitment to the longer term, building up that expertise, and embedding that process in the organisation. Here’s hoping.