UX explained: Research

UX researchers adopt various methods to uncover problems and design opportunities. Here we look at two methods for gathering the requirements and insights that aid design processes.

When thinking about research, many people think primarily of usability testing. While testing is an integral part of user research, many other methods are essential for determining product viability and whether we’re creating the right product. I use a wide range of user research methods.

The first phase of work on any new project is usually an intensive research period. I conduct ethnographic interviews, review competitors, and sometimes conduct a round of benchmark testing to validate the information gathered during the initial briefing.

What all user research has in common is that it places users at the centre of the process.

Quantitative research

Quantitative research methods deal with numbers and anything measurable in a systematic way. Surveys and A/B tests are common and relatively easy quantitative research methods.

Quantitative research aims to measure user behaviour in a quantifiable way that supports statistical analysis. For example, suppose you created two versions of your web form and split your traffic: 100 users in Europe got one version, and another 100 within the same region got the other. You’d be able to measure which form led to higher conversion rates, ending with confirmation or contradiction of the hypothesis tested in the design.

However, you must only vary one variable (or very few) at a time, so that any difference in the data collected can be attributed to that specific variable.
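The 100-versus-100 split described above can be checked for statistical significance with a standard two-proportion z-test. A minimal sketch in Python follows; the conversion counts are purely illustrative, not from a real study.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test on conversion rates."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z

# Illustrative numbers: form A converted 18 of 100 users, form B 31 of 100
p_a, p_b, z = two_proportion_z_test(18, 100, 31, 100)
print(f"A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}")
# |z| > 1.96 would reject "no difference" at the 5% significance level
```

With samples this small, only a fairly large difference in conversion rates reaches significance, which is why real A/B tests usually run until far more traffic has been collected.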

I’ve used this method in the following case studies:

Qualitative research

On the other hand, qualitative research is often used to gain an in-depth understanding of the experiences of individual users or user groups. It is best used for exploring a question or scoping out a problem, and it is also used to answer pre-defined questions in the advanced stages of a research study or to get products off the ground.

The only way to achieve an understanding of the people who will use your design is to interview them. I often place this type of research at the beginning of a project to ensure that the project’s overall direction is relevant to potential customers and users.

I’ve used this method in the following case studies:

BONUS: User Testing and User Testing Plans

Failing to plan is planning to fail – there’s a common misconception that running into the closest coffee shop will count as user testing and validate your idea.

Sure, any time spent with users can prove invaluable, and an average person may be able to raise simple usability issues, but will they understand the terminology?

Has a fashion designer on her coffee break ever drawn down her pension on a tablet before? More than likely, no.

A user testing plan is invaluable, and probably the only way you will get budget approval for long-term testing and platform optimisation.

The most common user testing plan contains what you are going to do, how you will conduct the test, what metrics you will capture, how many participants you will test, and what scenarios you will use.

Once everyone has commented, and the final plan is agreed upon, you can get going and create a report to reflect each test’s outcomes.

Elements of a Test Plan
  • Scope: Indicate what you are testing, e.g. the navigation, a particular flow, or a section of the site to assess conversion.
  • Purpose: Identify the concerns, questions, and goals for this test, e.g. “Can users navigate to product pages from the check-out basket?”
  • Schedule and location: Indicate when and where you will do the test. Before each sprint? Remote testing or lab testing?
  • Session description: Will participants be assisted? Will the session be timed? If so, how long?
  • Equipment: Indicate the type of equipment you will be using in the test: desktop, laptop, mobile/smartphone, monitor size and resolution, operating system, browser, etc. Also indicate any accessibility tools required, such as screen readers and audio descriptors.
  • Participants: Indicate the number and demographic of participants to be tested.
  • Scenarios: Indicate the number and types of tasks included in testing.
  • Metrics:
    • Subjective metrics: Include the questions you are going to ask the participants before the sessions.
    • Quantitative metrics: Indicate the quantitative data you will measure in your test (e.g., successful completion rates, error rates, time on task).
  • Roles: Include a list of the staff who will participate in the usability testing and what role each will play.
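The quantitative metrics named above (completion rates, error rates, time on task) reduce to simple aggregates over per-participant session records. A minimal sketch, assuming a hypothetical list of session dictionaries rather than any particular testing tool’s export format:

```python
from statistics import mean

# Hypothetical session records from a usability test: one dict per participant
sessions = [
    {"completed": True,  "errors": 0, "seconds": 74},
    {"completed": True,  "errors": 2, "seconds": 131},
    {"completed": False, "errors": 3, "seconds": 240},
    {"completed": True,  "errors": 1, "seconds": 95},
]

# Successful completion rate: share of participants who finished the task
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
# Error rate: mean number of errors per session
error_rate = mean(s["errors"] for s in sessions)
# Time on task: mean seconds spent on the task
time_on_task = mean(s["seconds"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")      # 75%
print(f"Errors per session: {error_rate:.2f}")        # 1.50
print(f"Mean time on task: {time_on_task:.0f}s")      # 135s
```

Agreeing these definitions in the plan, before testing starts, means every round of testing reports the same numbers and rounds can be compared in the final report.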
