Using first click testing to validate website designs


What’s the problem? 

The Sutton website team has been testing concepts for updates to the council website. The challenge is deciding which design is the most effective to take into a beta testing phase.

One of the methods we’re using to validate effectiveness is a first-click test. This is an activity that captures where on a web page a user first clicks or taps when completing a task. It’s a useful method to understand if users find a website clear and navigable.

Why is click testing important?

Research shows that when users follow the right path on the first click, they achieve task success around 87% of the time. This drops to around 46% if the first click leads down the wrong path.

These statistics echo the feedback we’re getting about the current website design. We’re hearing that the current design is unclear and often leads users to the wrong areas. The result is a frustrating experience as users are unable to complete their tasks. With this in mind, new designs for the Sutton website need to be thoroughly click tested to ensure users can successfully find what they need.

How do we conduct a click test?

The image below shows two variations of a homepage for the website. Each has a slightly different set of elements and navigation structure. We want to know which variant users find clearest to navigate.

Side by side comparison of two variations of the homepage

To test usability, we set users a simple instruction: indicate where on the interface you would go to renew a parking permit. The image below shows the responses we received as a heatmap.

A click heatmap showing where users would go to renew a parking permit

Once we’ve received a significant number of responses, the next step is to analyse the data.

Here we’re looking at three things to confirm whether a design is clear to navigate:

  • Distribution
  • Time
  • Confidence

Distribution

Did users click or tap in places that would lead to successful task completion? In the parking permit heatmap above, we can see that Variant A generated clicks in areas unrelated to the task, while Variant B received a more accurate distribution. An irregular click scatter can signal that a design is causing uncertainty; in that case, the labelling and structure should be reconsidered.

Time

How quickly did users decide where to click or tap? In the heatmap example above, the average click time for Variant A was 10 seconds; for Variant B it was 7 seconds. A longer decision time may indicate that the design of the page could be simplified.

Confidence

How confident did users feel that their click or tap would lead to task completion? To determine this, we asked users to rate how they felt on a confidence scale. In the parking permit example, users tended to feel more confident in their decisions using Variant B. Lower confidence suggests the labelling or design of the interface is unclear.
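To make these three measures concrete, here's a minimal sketch of how they might be computed from raw click-test responses. The data shape, coordinates and target region below are all illustrative assumptions; in practice, a click-testing tool reports these figures for you.

```typescript
// Hypothetical shape of one first-click response: where the user clicked,
// how long they took to decide, and their self-reported confidence.
interface ClickResponse {
  x: number;            // horizontal click position, in pixels
  y: number;            // vertical click position, in pixels
  timeToClickMs: number;
  confidence: number;   // e.g. 1 (not at all confident) to 5 (very confident)
}

// The area of the page that leads to successful task completion, e.g. the
// parking section of the homepage. These coordinates are made up.
const target = { left: 120, top: 340, right: 420, bottom: 460 };

const mean = (values: number[]) =>
  values.reduce((sum, v) => sum + v, 0) / values.length;

function summarise(responses: ClickResponse[]) {
  // Distribution: the share of first clicks landing on the right element.
  const onTarget = responses.filter(
    (r) => r.x >= target.left && r.x <= target.right &&
           r.y >= target.top && r.y <= target.bottom
  );
  return {
    onTargetRate: onTarget.length / responses.length,
    // Time: the average number of seconds before the first click.
    meanTimeSeconds: mean(responses.map((r) => r.timeToClickMs)) / 1000,
    // Confidence: the average self-reported score.
    meanConfidence: mean(responses.map((r) => r.confidence)),
  };
}
```

Comparing the summary for each variant gives the kind of figures discussed above: click distribution, average decision time (10 seconds for Variant A versus 7 for Variant B) and average confidence.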

Going forward

In this example we are confident that Variant B generally outperformed Variant A for usability. This does not mean the job is finished. We now need to test how our shortlist of design variants performs when users complete other tasks, for example, "Where would you go to pay a council tax bill?" Depending on what we discover, we may decide to redesign any areas of concern. In that case, we'd repeat the first-click process outlined above, but this time focus more attention on the problem areas.

When we’re finished, we’ll use the evidence to help us decide which website design to focus on in beta.

Luke Piper

User Researcher



Weeknote #2 (16 July 2021)


Here’s what we’ve got for you this week:


Every little automation helps councillor enquiries

Chris, Digital Innovations Lead
Trish, Business Partner

We’re working with the Customer Care teams across Sutton and Kingston to help refine and automate their existing councillor enquiry process. We know there's software on the market that would help, but we wanted to deliver some improvements while we evaluate products, run procurement and implement a solution.

Currently, the process is very manual. Enquiries from councillors and members of parliament (MPs) are received by email and manually entered into a spreadsheet tracker, assigned to an officer and sent onwards. Confirmation emails are sent manually, as are chasers and updates.

Our work will introduce a Google Form to create a consistent structure for all enquiries. We expect this will help us collect the information we need and reduce the email back-and-forth to clarify or get more information. Each form submission is automatically saved to a Google Sheet, and extra fields are added depending on context.

Councillors need us to confirm receipt of their enquiry and set a deadline for our response. Our work will automate this process to make the confirmation more timely for councillors, automatically calculate the deadline, and save the Customer Care team from sending this email manually. The confirmation email will include an automatically assigned reference number to help everyone involved refer to the enquiry.
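As a rough sketch of how this kind of automation hangs together in Google Apps Script (written here as TypeScript, which clasp supports), a form-submit handler could stamp a reference number and deadline on the new row and send the confirmation. The column positions, form field name and 10-working-day deadline rule below are our illustrative assumptions, not the actual configuration.

```typescript
// Sketch of a form-submit handler. The column numbers, form field name and
// 10-working-day deadline rule are illustrative assumptions.
function onFormSubmit(e: GoogleAppsScript.Events.SheetsOnFormSubmit) {
  const sheet = e.range.getSheet();
  const row = e.range.getRow();

  // Assign a sequential reference number, e.g. ENQ-0042.
  const reference = 'ENQ-' + String(row).padStart(4, '0');

  // Illustrative deadline rule: 10 working days from receipt.
  const deadline = addWorkingDays(new Date(), 10);

  // Assumed extra columns on the tracker: H = reference, I = deadline.
  sheet.getRange(row, 8).setValue(reference);
  sheet.getRange(row, 9).setValue(deadline);

  // Confirm receipt to the councillor, quoting the reference and deadline.
  const email = e.namedValues['Email address'][0]; // assumed question title
  MailApp.sendEmail(
    email,
    `Enquiry received: ${reference}`,
    `Thank you for your enquiry. Your reference is ${reference}. ` +
      `We aim to respond by ${deadline.toDateString()}.`
  );
}

// Adds the given number of working days to a date, skipping weekends.
function addWorkingDays(from: Date, days: number): Date {
  const result = new Date(from);
  while (days > 0) {
    result.setDate(result.getDate() + 1);
    if (result.getDay() !== 0 && result.getDay() !== 6) days--;
  }
  return result;
}
```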

The Customer Care team can pick an officer to assign from within the sheet using custom menu and interface items we've created. The sheet will email the officer to let them know they've been assigned an enquiry and include data from the relevant fields.

An example of the custom assignment interface we’ve built.
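A custom menu of this kind is the sort of thing Apps Script's Ui service makes straightforward. Here's a minimal sketch, again with invented column positions; it is not the team's actual implementation.

```typescript
// Adds an "Enquiries" menu when the spreadsheet opens.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('Enquiries')
    .addItem('Assign selected enquiry', 'assignEnquiry')
    .addToUi();
}

// Prompts for an officer's email address, records the assignment on the
// currently selected row, then notifies the officer. Column numbers are
// illustrative.
function assignEnquiry() {
  const ui = SpreadsheetApp.getUi();
  const response = ui.prompt('Officer email address:');
  if (response.getSelectedButton() !== ui.Button.OK) return;

  const officer = response.getResponseText();
  const sheet = SpreadsheetApp.getActiveSheet();
  const row = sheet.getActiveRange().getRow();
  sheet.getRange(row, 10).setValue(officer); // assumed "Assigned to" column

  const reference = sheet.getRange(row, 8).getValue(); // assumed reference column
  const summary = sheet.getRange(row, 3).getValue();   // assumed summary column
  MailApp.sendEmail(
    officer,
    `Enquiry assigned to you: ${reference}`,
    `You have been assigned enquiry ${reference}:\n\n${summary}`
  );
}
```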

Each day, a script updates the status of enquiries, checks deadline dates, emails reminders to officers, and updates councillors if a deadline is breached. Once a week, a round-up email is sent to assistant directors detailing their directorate or team's performance.
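These schedules are the sort of thing Apps Script's time-driven triggers handle. A sketch of how they might be installed follows; the handler function names and times of day are ours, not the team's.

```typescript
// One-off setup: install the daily and weekly time-driven triggers.
// Handler names ('checkDeadlines', 'sendWeeklyRoundUp') are illustrative.
function installTriggers() {
  // Check deadlines and send reminders every morning.
  ScriptApp.newTrigger('checkDeadlines')
    .timeBased()
    .everyDays(1)
    .atHour(8)
    .create();

  // Send the directorate performance round-up every Monday.
  ScriptApp.newTrigger('sendWeeklyRoundUp')
    .timeBased()
    .onWeekDay(ScriptApp.WeekDay.MONDAY)
    .atHour(9)
    .create();
}
```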

Our next step is to pilot the automated form and sheet with a small group of willing volunteers. From the response we've had already, we think even this initial iteration will start to add value and increase efficiency.


We want to find the right answer, not have the right answer

David, Service Designer

The waste delivery team is wrestling with ideas for improving our services and how we can make sure we make the right improvements. It can be tempting to think that we know what improvements to make, but how do we know that we know?

We found Erika Hall’s 2018 talk “Design Research Done Right” very interesting. Erika talks about how much we (all) like to be “right” and how that bias can impact our design research negatively.

In design research we all want to find the right answer, but we need to remember that we (should) want to find the right answer, not have the right answer. Most of our professional incentives reward having the right answer, and this is one reason why design research is tricky. If we aren't asking the right question, we may get an answer that earns us an immediate "reward" (praise, promotion, approval for a budget or project request, momentum on a stuck team) but at some larger future cost.

As Erika puts it, "answers have a very short shelf-life". The world changes, and if our answers don't change with it, they become wrong. Too often those of us in design and technology treat "research" as a stage of our work. We did the research, we got the answer – and that answer turns into an assumption over time. The problem is that relying too much on our assumptions introduces risk to our work. Anyone who has worked on a few projects will be able to think of "facts" they got from research that didn't quite stand up in the real world.

Erika encourages us to live in the uncertainty of not knowing, to allow ourselves to be uncomfortable with this and to keep asking questions. 

There is so much bad design in the world because people are more interested in defending their answer throughout the process than in really asking questions:

  • Is this something people need?
  • Do we have the resources to do it right?
  • Is designing something like this and solving this need going to help us achieve our goals?

Design and user research leads to evidence-based decisions and helps us overcome some of our many cognitive biases.

There are many great insights and ideas in the talk, and if any of these appeal to you:

  • the need to incentivise teams who deliver, not individuals who have answers; 
  • exploring the difference between collaboration and consensus and the need to embrace conflict; 
  • how to influence decision makers when we know data does not change minds; 
  • why even great design teams produce bad design (see Apple & iTunes); 
  • the value of a good question to help you make a better bet about user behaviour

…then it’s a good use of 45 mins of your time.

One takeaway is that adopting a goal-driven and sceptical mindset is a great starting point for design success.

This rings true to us.


Things we’ve read lately

Pamela’s been reading about Using persona profiles to test accessibility. It's a really interesting concept: you build a profile on a Google Chromebook and can then test designs through the lens of a user with accessibility needs. This gives you insight during the prototype design phase, helping to ensure a design works for all users.

David has been reading about the challenges of building complexity on a low-code platform as part of his work to rebuild some of the councils’ waste service transactions.

John Paul has been reading about how GOV.UK is approaching accounts and what that might mean for service delivery. This is part of a piece of work we're doing to consider how accounts are used on council websites.