In my previous post, I provided an overview of how UX testing can help identify bottlenecks in your customer journeys. Today, we’ll explore a number of UX tests and reports I’ve used over the years, with a brief explanation of when to consider each of them. To start, reference the matrix chart below to determine what kind of data you want to collect:
Although some of the tests seem to overlap, they actually answer different questions. For example, A/B tests may answer questions on which creative asset (headline copy, font style, or photography) is most effective in generating action, while a tree test may guide the content organization to optimize the customer journey.
Not all tests are actual tests; some passively collect data in the background, such as clickstream and heatmap analysis, where you’ll receive a report based on your website traffic. Of course, you can always configure the test to collect data from a group of participants you’ve recruited.
UX Tests and Analysis
Depending on how you set up the test, the results can be generative or evaluative: generative means you’re collecting data to guide the development of content/design elements, while evaluative means you’re validating those elements after the fact.
I’d suggest approaching UX testing from broad to specific—meaning, starting with quantitative tests using a large population sample to identify the most frequent issues, and then clarifying those issues through qualitative tests to understand why they’re occurring.
Below is a list of some of the most common types of UX tests:
Usability Testing
This test evaluates how people interact with your existing website or a prototype your team is developing. Participants are presented with a series of tasks and are asked to narrate their thought process, which gives you an idea of how they organize content, perceive menus, and make decisions on a page.
Because this test is comprehensive, I only recruit up to 6 participants, and then run additional tests as we iterate on the prototypes. It’s important to note that these participants must belong to the same audience type: prospective students, alumni, staff, or current students. Mixing these audiences will yield inconsistent results.
Eye-tracking Study
A specialized camera or headset records the participant’s gaze as they view your web page. The study tracks how participants navigate your interface, measures how long their gaze stays in an area, and collects other metrics to help you refine your page layout and stylistic elements. Because this test requires additional hardware and calibration for each participant, it can be very time-consuming.
Eye-tracking studies are often used in academic research or by specialized design agencies/departments that focus on pushing the envelope on user interface design. Instead of eye-tracking studies, I typically recommend a combination of usability tests, heatmaps, and A/B tests, which produce more substantial results and let your team iterate on optimizations much faster.
A/B Test
In this test, you present the original web page to half of your visitors and a variant page to the other half. The variant page may have a different headline, photography, call to action, or a combination thereof. This is a great way to test landing pages or a key page where your creative team has several design options. An A/B test measures the quality of the visits on several metrics, including page duration, bounces, clicks, and more. You can also define an audience segment based on several dimensions, such as demographic information, and include only that segment in your A/B test.
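If you’re curious about the mechanics behind the 50/50 split, here’s a minimal sketch of deterministic bucketing. In practice your testing tool handles this for you; the visitor ID source and variant names here are illustrative assumptions.

```ts
// A minimal sketch of deterministic 50/50 bucketing (hypothetical names).
// Hashing a stable visitor ID (e.g., from a first-party cookie) keeps each
// visitor in the same bucket across sessions.

function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    // Simple 32-bit rolling hash; real testing tools use something similar.
    h = (h * 31 + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

function assignVariant(visitorId: string): "control" | "variant" {
  return hashString(visitorId) % 2 === 0 ? "control" : "variant";
}

// Usage: render whichever page version matches this visitor's bucket.
console.log(assignVariant("visitor-12345"));
```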
First Click Test
First impressions are important, and providing the right pathways to your visitors’ goals is equally important. You want to make sure the first link they click (menu, call to action, or in-page link) leads to the right destination. In this test, participants are presented with a task that may be several steps removed, but their first click gives you a glimpse of their decision-making process. First click tests are also useful for teams with limited resources, or if your website is small enough that a usability test or a tree test would be overkill.
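Here’s a minimal sketch of what first-click instrumentation might look like in the browser; most first-click tools do this for you, and the `/log` endpoint is a placeholder for your own collector.

```ts
// A minimal sketch of first-click instrumentation in the browser.
// The "/log" endpoint is a placeholder, not a real service.
const start = performance.now();

document.addEventListener(
  "click",
  (event) => {
    const target = event.target as HTMLElement;
    const payload = {
      element: target.tagName.toLowerCase(),
      label: (target.textContent ?? "").trim().slice(0, 80),
      msToFirstClick: Math.round(performance.now() - start),
    };
    navigator.sendBeacon("/log", JSON.stringify(payload)); // fire-and-forget
  },
  { capture: true, once: true } // record only the first click
);
```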
Tree Test
Participants are presented with a menu interface similar to a file directory (Windows Explorer or Finder) and are asked which “folder” they would open to complete a given task. Of course, the “folders” are a representation of your web pages. The tree test is sometimes referred to as a “closed card sort,” and it can be used to validate your open card sort results.
The final report shows which page or path is visited the most for a particular task, similar to a clickstream analysis. Depending on your test provider, the report may also visualize incorrect paths and backtracking, and highlight issues with your architecture or page titles.
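If you’re working from raw exported results instead of a vendor report, here’s a minimal sketch of how you might score each task yourself. The data shape is an assumption, not any vendor’s format.

```ts
// A minimal sketch of scoring exported tree test results. "Success" means
// the participant ended on the correct node; "direct" means they got there
// without backtracking.

interface TreeTestResult {
  task: string;
  path: string[];      // nodes visited, in order
  destination: string; // the node the participant chose
}

function scoreTasks(results: TreeTestResult[], correct: Record<string, string>) {
  const byTask = new Map<string, { n: number; success: number; direct: number }>();
  for (const r of results) {
    const s = byTask.get(r.task) ?? { n: 0, success: 0, direct: 0 };
    s.n++;
    const ok = r.destination === correct[r.task];
    if (ok) s.success++;
    // A direct path never revisits a node, i.e., no backtracking occurred.
    if (ok && new Set(r.path).size === r.path.length) s.direct++;
    byTask.set(r.task, s);
  }
  return byTask;
}
```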
Clickstream Analysis
This report visualizes how visitors flow from page to page on your website, showing the most common paths visitors take to reach your content and which pages lead to website abandonment. Google Analytics (through Behavior Flow), Adobe Analytics, and many other web traffic monitoring tools provide this type of clickstream analysis. You can then identify where visitors are having difficulty through crisscrossing patterns or backtracking in their navigation. In contrast, a clean clickstream should align with your customer journey(s) and end with your visitors arriving on your goal pages (enroll, buy, register, etc.).
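Under the hood, these flow visualizations are built from page-to-page transition counts. Here’s a minimal sketch, assuming you’ve exported session paths as arrays of page URLs (the input shape is an assumption about your analytics export):

```ts
// A minimal sketch of counting page-to-page transitions, the raw ingredient
// of a clickstream/flow visualization.

function transitionCounts(sessions: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const pages of sessions) {
    for (let i = 0; i < pages.length - 1; i++) {
      const edge = `${pages[i]} -> ${pages[i + 1]}`;
      counts.set(edge, (counts.get(edge) ?? 0) + 1);
    }
  }
  return counts;
}

const sessions = [
  ["/", "/admissions", "/apply"],
  ["/", "/admissions", "/", "/admissions"], // backtracking shows up as A -> B -> A
];
console.log([...transitionCounts(sessions).entries()].sort((a, b) => b[1] - a[1]));
```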
Heatmap Study
This study provides a page-level analysis of where visitors scrolled, moused-over, or clicked on the page. While tree tests and clickstream analysis show a bird’s eye view of your visitors navigating your website architecture, a heatmap provides detail on how visitors engage with your web page.
If you have a large website with thousands of visitors a day, you should consider segmenting your audiences or selecting key pages in the customer journey. Otherwise, you’re assuming all your audiences have the same interests and journeys.
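For context, here’s a minimal sketch of the kind of client-side capture a heatmap tool performs, with an audience segment attached so the data can be split later. The cookie name and `/heatmap` endpoint are illustrative assumptions.

```ts
// A minimal sketch of heatmap-style click capture, tagged with an audience
// segment read from a hypothetical "audience" cookie.

document.addEventListener("click", (event) => {
  const doc = document.documentElement;
  const payload = {
    page: location.pathname,
    // Normalize to document size so clicks aggregate across screen sizes.
    x: (event.pageX / doc.scrollWidth).toFixed(4),
    y: (event.pageY / doc.scrollHeight).toFixed(4),
    segment: document.cookie.match(/audience=(\w+)/)?.[1] ?? "unknown",
  };
  navigator.sendBeacon("/heatmap", JSON.stringify(payload));
});
```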
Focus Group
This “group interview” is an informal discussion to gauge participants’ opinions about a certain design, brand messaging, or some aspect of your creative work. I create a discussion guide ahead of time and use it with different audience groups (students, faculty, alumni, etc.). I typically request between 5 and 9 participants per group, but err on the lower end if I’m speaking to students, simply to avoid conformity bias. Unlike interviews, focus group discussions shouldn’t dive too deeply into a topic.
Interviews
Although similar to a focus group, interviews are more comprehensive and help clarify issues discovered in your data research and surveys. If you’re interviewing a group, I suggest limiting it to fewer than 11 participants, preferably of the same audience type (alumni, current students, or administrators). I prefer limiting each group to five participants to keep the discussion manageable, adding more groups as needed.
Card Sorting
Card sorting reveals how visitors expect your content to be organized: participants evaluate the clarity of your web page titles and how they should be grouped. In the analog days, this test used flashcards, and participants simply sorted the cards into categories, hence its name. Today, we have digital tools and online services to run these tests more efficiently, even if they no longer display actual “cards.”
There are two types of card sorting: open and closed. With an open card sort, you ask participants to organize a list of items. The participants cluster each item into categories that are most intuitive for them. For example, if you present a list of fruits, the participants may organize them based on size, color, shape, or many other categories. Understanding which categorization makes the most sense to them will guide your development of a more intuitive menu.
As the name implies, a closed card sort is the reverse of an open card sort, where the categories are pre-defined and the participants are asked to place the items under each category. Closed card sort is often used to validate the results of an open card sort, and tree testing is a form of an interactive closed card sort. While both types of tests are evaluative, open card sort can also be generative if you’re creating a new information architecture.
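If you want to analyze open card sort results yourself, a co-occurrence count is a good starting point: it tallies how often two cards land in the same group across participants. Here’s a minimal sketch, assuming each participant’s sort is exported as an array of groups (the export shape is an assumption).

```ts
// A minimal sketch of a co-occurrence count from open card sort results.
// High counts suggest those pages belong under the same menu.

type ParticipantSort = string[][]; // each inner array is one group of cards

function coOccurrence(sorts: ParticipantSort[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const groups of sorts) {
    for (const group of groups) {
      for (let i = 0; i < group.length; i++) {
        for (let j = i + 1; j < group.length; j++) {
          // Sort the pair so "A + B" and "B + A" count as the same key.
          const key = [group[i], group[j]].sort().join(" + ");
          counts.set(key, (counts.get(key) ?? 0) + 1);
        }
      }
    }
  }
  return counts;
}

const sorts: ParticipantSort[] = [
  [["Tuition", "Financial Aid"], ["Majors", "Course Catalog"]],
  [["Tuition", "Financial Aid", "Scholarships"], ["Majors"]],
];
console.log(coOccurrence(sorts)); // "Financial Aid + Tuition" => 2, etc.
```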
Email Surveys
Prior to on-site interviews, email surveys can help prioritize the issues you’ve found in your data analysis. I use Likert scale questions to measure responses and limit open-ended questions. The results should help refine the questions for your interviews or focus group sessions. I typically develop audience-specific questions for the IT team, admissions, marketing, student groups, and more.
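Here’s a minimal sketch of how you might summarize Likert responses with a mean and a “top-two-box” share (the fraction of 4s and 5s) per question; the input shape, one question-to-rating object per respondent, is an assumption about your survey export.

```ts
// A minimal sketch of summarizing 5-point Likert responses per question.

function summarizeLikert(responses: Record<string, number>[]) {
  const byQuestion = new Map<string, number[]>();
  for (const response of responses) {
    for (const [question, rating] of Object.entries(response)) {
      if (!byQuestion.has(question)) byQuestion.set(question, []);
      byQuestion.get(question)!.push(rating);
    }
  }
  const summary: Record<string, { mean: number; topTwoBox: number }> = {};
  for (const [question, ratings] of byQuestion) {
    const mean = ratings.reduce((a, b) => a + b, 0) / ratings.length;
    const topTwoBox = ratings.filter((r) => r >= 4).length / ratings.length;
    summary[question] = { mean: +mean.toFixed(2), topTwoBox: +topTwoBox.toFixed(2) };
  }
  return summary;
}

console.log(
  summarizeLikert([
    { "easy to find tuition info": 4, "menus are clear": 2 },
    { "easy to find tuition info": 5, "menus are clear": 3 },
  ])
);
```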
Intercept Surveys
This survey pops up in the user’s browser when they visit a specific page or scroll to a certain point on the page. It’s a great way to generate feedback on the existing website if the student or faculty groups are unable to respond to your email surveys. The survey typically asks the visitor to self-identify (student, faculty, etc.), and then to rate their experience of the website on several factors. The surveys should be short enough to complete in less than three minutes.
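Most survey tools provide the trigger for you, but here’s a minimal sketch of a scroll-depth trigger; the showSurvey() hook and the 60% threshold are hypothetical.

```ts
// A minimal sketch of a scroll-depth trigger for an intercept survey.

let surveyShown = false;

function showSurvey(): void {
  console.log("survey opened"); // replace with your survey widget's open call
}

window.addEventListener(
  "scroll",
  () => {
    if (surveyShown) return;
    const doc = document.documentElement;
    // Fraction of the page the visitor has scrolled past.
    const depth = (window.scrollY + window.innerHeight) / doc.scrollHeight;
    if (depth >= 0.6) {
      surveyShown = true;
      showSurvey();
    }
  },
  { passive: true }
);
```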
Resources & Tools
Here’s a quick list of some of the resources and tools I recommend for website and landing page evaluation.
For questions, comments, or suggestions, find me on Twitter @muzel_dh.