Understanding which customer behaviors lead to conversion is the fastest way to know which product changes to make.

As a data person, I frequently get asked for help from product teams. Over the years, I've spent quite a bit of time explaining the difference between good and bad questions.

Typically product teams want to improve something (say conversion). They’ll come to me for advice on what to measure, and nearly all the time they want to discover which product changes will move their metric.

“I want to know whether X matters so I’ll split my audience, test the product change, and see if it makes an impact”.

Notice that this question is from a product perspective: ‘did a product change have an impact?’, instead of a customer perspective: ‘which customer behavior leads to conversion?’

I know this sounds like such a small nuance, but it’s often the root cause why product teams launch features that don’t increase conversion.


Here are some common questions from the product perspective:

  • I want to see whether the time we launch a class affects the number of people who sign up. We’ll launch the same class to half our audience at 4 pm and the other half at 5 pm, then see how the different cohorts react.
  • I want to test which email messaging leads to more sales. I’ll split the audience in half, send each a different email, and see how many of them purchase.

The problem with these is that there’s no causality. If we want to increase a customer metric (like conversion), we have to ask a question about customer behavior.

Let’s look at the first example to illustrate.


“I want to see whether the time we launch a class affects the number of people who sign up.”

Launching a class doesn’t affect customers at all. Why not? Because nothing has happened to the customer. Do they know you launched a class? If you sent a notification did they see that notification? There’s no direct connection between ‘launching’ and user behavior.

If there’s no direct connection it’s not causal — in other words, if you ‘improve’ the time at which you launch a class, you will probably not increase conversion.

Let’s try something more direct: “How does the time of day a customer views a class affect their likelihood to sign up for that class?”

‘Viewed a class’ is definitely a customer action. And this is a far better question for two reasons:

  1. We don’t need to run an A/B test at all to answer it. We already have the data (customers have already viewed classes at different times), so we can start answering it immediately.
  2. If we conclude that view time matters, we now have a concrete action to take (try to get more customers to view a class at X time).
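To make the first point concrete, here’s a minimal sketch of answering the question from data you already have. The event log and field names are hypothetical — any table of (customer, hour they viewed, whether they signed up) would do:

```python
from collections import defaultdict

# Hypothetical event log: (customer_id, hour the class was viewed, signed_up)
views = [
    ("c1", 10, True), ("c2", 10, True), ("c3", 10, False),
    ("c4", 16, False), ("c5", 16, True), ("c6", 16, False),
]

def signup_rate_by_hour(events):
    """Group existing view events by hour and compute the sign-up rate per hour."""
    totals, signups = defaultdict(int), defaultdict(int)
    for _, hour, signed_up in events:
        totals[hour] += 1
        signups[hour] += signed_up  # True counts as 1
    return {hour: signups[hour] / totals[hour] for hour in totals}

rates = signup_rate_by_hour(views)
# rates[10] is 2/3 and rates[16] is 1/3 for this toy data
```

No experiment, no audience split — the comparison falls out of historical behavior.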

The question of HOW to get customers to view classes at a more optimal time is a great product question, and its progress is very easy to measure.


The approach now is straightforward. Once you know the best time for customers to view a class, you can measure the change in the percent of customers viewing the class at this time.

  1. Create a metric by dividing customers into buckets based on the time they’ve viewed a class. (20% of customers viewed at 10:00 AM, 15% at 11:00 AM …)
  2. Make some product changes to shift the percentage of customers in each bucket to the desired one.
  3. As the percentages change, look at the overall conversion rate again. It should increase.
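Step 1 above is just a distribution over view-time buckets. A minimal sketch, again with hypothetical data, of computing those percentages so you can track how product changes shift them:

```python
from collections import Counter

# Hypothetical: the hour of day at which each customer viewed the class
view_hours = [10, 10, 10, 10, 11, 11, 11, 14, 14, 16]

def bucket_shares(hours):
    """Percent of customers in each view-time bucket (step 1 of the metric)."""
    counts = Counter(hours)
    total = len(hours)
    return {hour: 100 * counts[hour] / total for hour in sorted(counts)}

shares = bucket_shares(view_hours)
# 40% of customers viewed at 10:00, 30% at 11:00, 20% at 14:00, 10% at 16:00
```

Recompute this after each product change (step 2), and watch conversion as the mass shifts toward the desired bucket (step 3).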

What about the other question?

“I want to test which email messaging leads to more sales. I’ll split the audience in half, send each a different email, and see how many of them purchase.”

The customer doesn’t care about receiving an email. It could be marked as spam, bounce, or simply go unnoticed in many other ways.

So what’s a good customer action?

“How does opening email A vs opening email B affect the likelihood to purchase within 30 minutes?”
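This re-framed question is also answerable directly from event data. A minimal sketch, assuming a hypothetical log of (customer, email variant opened, open time, purchase time or None):

```python
from datetime import datetime, timedelta

# Hypothetical events: which email each customer opened, when, and when (if ever) they purchased
opens = [
    ("c1", "A", datetime(2023, 1, 1, 9, 0), datetime(2023, 1, 1, 9, 20)),
    ("c2", "A", datetime(2023, 1, 1, 9, 5), None),
    ("c3", "B", datetime(2023, 1, 1, 9, 0), datetime(2023, 1, 1, 11, 0)),
    ("c4", "B", datetime(2023, 1, 1, 9, 10), datetime(2023, 1, 1, 9, 25)),
    ("c5", "B", datetime(2023, 1, 1, 9, 15), None),
]

def purchase_rate(events, variant, window=timedelta(minutes=30)):
    """Share of customers who opened `variant` and purchased within the window."""
    opened = [(t, p) for _, v, t, p in events if v == variant]
    converted = sum(1 for t, p in opened if p is not None and p - t <= window)
    return converted / len(opened)

rate_a = purchase_rate(opens, "A")  # 1 of 2 openers purchased within 30 min
rate_b = purchase_rate(opens, "B")  # c3 purchased too late to count; 1 of 3
```

Note the denominator: customers who *opened*, not customers who were *sent* the email — that’s the customer-perspective shift.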


One primary reason I get these types of questions is that product-facing changes are easier to control. You can’t control exactly when a customer views a class, but you can control when you launch it to the website.

The entire point of product questions is to understand which product changes will increase conversion. By re-framing questions from the customer’s perspective you’ll understand which customer behaviors matter. You’ll now be solving a much simpler problem: how can I change the product to affect a specific customer behavior?


What can I do about it?

These mistakes are natural in a data-driven org. Part of the reason I co-founded Narrator was to counteract this — our data system is designed to only ask questions from the customer’s perspective.