Onboarding Part 2 – Case Studies

In part 1, I described how retention issues affect businesses and how UX teams are sometimes asked to fix the problems.

It’s story time! Let’s take a look at a couple of real-life examples.

A Social Platform

The first product facing challenges is a social platform aimed at older adults. Think Facebook in terms of general functionality, plus virtual, live programming. Membership was paid for by a sponsoring organization, such as a health insurance company, and provided to its customers as a free benefit.

Prospective new members were contacted by the sponsor via email or snail-mail brochures. This material contained a link to a landing page with more information, where the older adult could register as a new member.

The product suffered from low retention rates, and there were plenty of theories as to why, ranging from “it looks too complicated” to “there’s not enough activity” to “members can’t remember how to get back to the site.” In particular, there was a lot of focus on the site’s homepage, suspected of being overwhelming, confusing, or otherwise unusable. Since the homepage was what members saw on their first visit and every visit after, it made sense to ensure it was as welcoming and clear as possible.

But was the homepage really the problem? It is very challenging to get any feedback from disengaged members, let alone something as nebulous as their first impressions of a website.

To better understand what a new user experienced, I designed a user research study that started at the beginning – with a simulated email from the participant’s health insurance company. During each session, we observed participants read the email, then visit the landing page, the registration page, and finally the homepage.

Even before we had completed the 5 planned sessions, it was obvious that the homepage was not the main issue. We discovered that the sponsor’s email and our landing page both painted an ambiguous picture. Participants simply did not understand what was offered. Many asked, “Is this a dating site?” (It is not.) “Is this a group that meets in my area?” (It is purely virtual.) Or they interpreted ambiguous terms in their own ways. For example, we started gaining insight into the various interpretations of “meeting people” or even “learning” – terms we thought were straightforward.

This research was crucial for two reasons:

  1. It gave us valuable insight into how our audience interpreted our messaging, what terminology they used to describe things, and what resonated with them.
  2. It challenged existing assumptions and pointed to a problem no one knew we had.

Before we could tackle engagement, we needed to ensure new members were set up for success. Focusing on acquisition first would also increase the odds that people who might be a good fit would sign up in the first place.

Software QA Test Tool

The second product was a test automation tool for software teams. It had a 2-week free trial and, as with any trial, the hope was that trial users would quickly get up to speed, discover the product’s value, and become paying customers.

When I joined this project, the product was mostly built and nearing public release. My initial usability testing with new users surfaced a number of issues. However, due to the impending release, we could make only superficial changes.

After release, conversion rates and feedback from Sales and Support confirmed we had a problem. Product leadership tasked the team to “fix the onboarding.”

We tried different approaches that avoided big product changes:

  1. We added a few subtle hint pop-ups to steer new users toward the most relevant pages
  2. We broke the very first interaction into a few steps to provide more explanation and to force a couple of actions we thought were necessary to fully evaluate the product
  3. We built additional tooltip flows in Appcues to guide new users through their first key tasks (a sketch of what that wiring can look like follows this list)

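As an aside, for anyone who hasn’t used a tool like Appcues: flows are designed in its web editor, then triggered from the product through its JavaScript SDK. Below is a minimal sketch of what that trigger side can look like, assuming the standard Appcues embed script is already loaded; the user ID, property names, event name, and flow ID are all hypothetical, not our actual setup.

```typescript
// Minimal sketch of app-side Appcues wiring (all IDs hypothetical).
// Assumes the Appcues embed script has loaded and exposed a global `Appcues`.

declare const Appcues: {
  identify(userId: string, properties?: Record<string, unknown>): void;
  track(eventName: string): void;
  show(flowId: string): void;
};

// Identify the user so flows can target segments such as trial accounts.
Appcues.identify("user-123", {
  plan: "trial", // hypothetical property used for flow targeting
});

// Option 1: emit an event that a flow is configured to trigger on.
Appcues.track("Created First Test");

// Option 2: launch a specific flow directly by its ID.
Appcues.show("-Lx1AbCdEfGh"); // hypothetical flow ID from the Appcues editor
```

Part of the appeal – and the limitation – is that these flows live alongside the product rather than in it, which is why they can nudge behavior but can’t change the underlying design.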
While we saw limited success with each of these changes, none of them moved the needle. Meanwhile, I continued usability testing to understand what trial users were doing and thinking. The results built on the earlier rounds, and my initial conclusions hardened into an inescapable truth – the product had fundamental issues that no number of band-aids could fix.

New users were confronted with too many product features and concepts all at once. And, because of the design, there was no way to avoid or defer many of the most confusing or complicated aspects. As a result, many participants failed to complete the most basic task a QA test tool should support – creating a test – within a usability session. That is not a good first impression.

There was no way to “fix the onboarding.” We’d have to take a step back and figure out how to fix the product.


In part 3, I’ll focus on the quick fixes we tried and their pitfalls. In parts 4 and 5, I’ll tell you more about how we tackled the problems described here.
