5 Common Pitfalls in Data-Driven SaaS Product Management

If you’re a product manager responsible for SaaS applications, you’ve probably been hearing increasing chatter about data-driven SaaS product management.

Leveraging data can certainly help you make better strategic decisions about your products, as opposed to relying only on your intuition. And with SaaS products in particular, actionable data is becoming more readily accessible than ever. But it’s important to go about data-driven product management the right way—because it can also be easy to misinterpret data, focus on the wrong data, or fail to put the data you’re gathering into its proper context.

So let’s walk through some of the most common pitfalls that could trip you up as a data-driven SaaS product manager. First, though, let’s take a look at one example of how gathering and analyzing even mountains of data can still mislead you.

Why Data-Driven Product Management Won’t Automatically Lead to Better Products

On Election Day 2016, The New York Times told readers that all available data strongly suggested a Hillary Clinton victory. According to pre-election polling, the paper wrote, Clinton had an 85% likelihood of winning the presidency. Donald Trump, they pointed out, had a 15% chance—or “about the same as the probability that an NFL kicker misses a 37-yard field goal.”

During that election season, few media outlets had access to more data or more sophisticated analysts than the political team at The New York Times. Their approach was certainly “data-driven.” And yet… well, you know the rest. They were wrong. Bigly.


The data, it turned out, told the pollsters and political analysts only one side of the story. It couldn’t tell them, for example, about the voters who liked Trump’s message but were afraid to admit that to pollsters, or about the voters who simply refused to take those survey calls in the first place, or about… who knows what other details data simply can’t tell you?

This is an extremely important lesson for PMs: Data-driven product management won’t automatically lead to better products because, although data is valuable, it’s not a conclusion in and of itself; data is just information to help you reach your own conclusions. If you misuse it, or rely on it to the exclusion of other factors (like the larger context), data can actually do more harm than good to your products. Here are some common examples.

5 Common Risks in Data-Driven SaaS Product Management

1. An exclusive focus on one “north star” metric leads to missed information.

It’s a smart strategy to rank the different data you’re collecting and analyzing according to their importance to the broader goals of your products or company. That means conversions or revenue will probably deserve to be weighted more heavily than, say, the number of Twitter followers your company is amassing.

(If you’re looking for ideas about how to rank various metrics to inform your product decisions, we’ve got an entire post on developing a SaaS product metrics pyramid.)

Such a ranking system will inevitably mean you select one top metric that you regard as more important than all of the others: your “north star” metric. Establishing a north star metric is a useful way to stay focused on the metrics that matter, but it’s equally important not to focus on it to the exclusion of everything else; you still need to step back and see the bigger picture.
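
To make that concrete, here is a minimal sketch in Python of what a weighted metrics pyramid could look like in practice. The metric names, targets, and weights below are entirely hypothetical; the point is simply that the north star metric carries the most weight without becoming the only number you watch.

```python
# A minimal, hypothetical sketch of a weighted metrics pyramid.
# Every metric name, number, and weight here is illustrative only.

metrics = [
    # (name, current value, target value, weight, lower_is_better)
    ("monthly_recurring_revenue",   82_000, 100_000, 0.4, False),  # the "north star"
    ("trial_to_paid_conversion",    0.11,   0.15,    0.3, False),
    ("weekly_active_accounts",      640,    800,     0.2, False),
    ("support_tickets_per_account", 1.8,    1.2,     0.1, True),
]

def product_health_score(metrics):
    """Blend several weighted metrics into one score, rather than watching a single number."""
    score = 0.0
    for name, current, target, weight, lower_is_better in metrics:
        progress = target / current if lower_is_better else current / target
        score += weight * min(progress, 1.0)
    return round(score, 2)

print(product_health_score(metrics))  # 0.77 with these made-up numbers
```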

The north star metric for The New York Times’ analysts during Election 2016 was clearly the percentage of likely voters who stated in surveys that they planned to vote for Hillary Clinton. But that metric seemed like such a convenient proxy for the eventual real votes that the analysts may have focused on it to the exclusion of other important signals, such as the respective audience sizes at each candidate’s campaign rallies. Factoring in data points like those might have led the experts to see a different picture emerging.

2. Too much focus on data leads to analysis paralysis.

As Becky Kane writes, gathering too much data can lead to overthinking, which in turn creates all sorts of problems.

For product managers trying to be more data driven, the analysis-paralysis problem can be particularly sneaky because it will likely start with us making a good-faith effort to compile and study relevant, actionable information before making our next move with our product. Which is exactly what we’re supposed to do, right?

But then one piece of data suggests we should seek another piece of data, which leads to yet another, and soon we’re stuck in a continuous loop of compiling and analyzing because we fear that the next piece of data, somewhere out there, might tell us something extremely valuable. So we become nervous about making any decisions without doing still more research.

The research-backed recommendation from the Doist blog, which we agree with, is to first determine a big-picture objective. In your case as a data-driven SaaS product manager, that means defining your product’s set of strategic goals. From there, you can review and analyze the relevant data points, but then it will be time to stop analyzing and start making decisions with the information you have.

3. Cognitive bias leads to missed insights or incorrect conclusions.

When making data-driven decisions, or any decision about product strategy for that matter, we need to be conscious of our own inherent cognitive biases. As Cindy Alvarez pointed out in her recent talk at Mind the Product, cognitive biases can easily outsmart us.

“You are all an incredibly smart audience, and it doesn’t actually matter, because cognitive biases are really going to screw up what you do anyways,” she said.

One type of cognitive bias that seems particularly relevant here is the confirmation bias. That’s the bias that causes us to seek out evidence that proves our pre-existing theory correct, while ignoring or downplaying evidence that cuts the other way. (As Alvarez notes, you might not even be aware you’re doing this.)

Imagine you’re responsible for a B2B SaaS product. While reviewing usage data from your installed base, you discover that customers aren’t using a particular feature that you believe is extremely valuable for their workflow.
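
As a rough illustration of how that kind of discovery surfaces, here is a minimal sketch in plain Python. The accounts, features, and event format are all made up; in a real product, the events would come from your analytics tool or data warehouse rather than a hard-coded list.

```python
# Hypothetical usage-event log: (account_id, feature_name) pairs.
events = [
    ("acme",     "dashboard"), ("acme",     "export"),
    ("globex",   "dashboard"),
    ("initech",  "dashboard"), ("initech",  "export"),
    ("umbrella", "dashboard"),
]

accounts = {account for account, _ in events}

def adoption_rate(feature):
    """Share of accounts that have used the feature at least once."""
    users = {account for account, used in events if used == feature}
    return len(users) / len(accounts)

for feature in ("dashboard", "export", "bulk_import"):
    print(f"{feature}: {adoption_rate(feature):.0%}")
# dashboard: 100%, export: 50%, bulk_import: 0%  <- the feature nobody is using
```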

If you went into this data analysis looking for objective insights, you might consider several possibilities:

  • Maybe my team and I were correct that this feature would be valuable, but we built it in a way that doesn’t work for our users.
  • Maybe my team and I were just wrong in our assessment that users would want or need a feature like this.
  • Maybe my team and I were correct about the value of the feature—and we also built it correctly—but where we placed it in the app makes it difficult to find.

Now, with those possibilities (and maybe others) in mind, you can go out and conduct some interviews and surveys to determine the truth. That’s data-driven SaaS product management.

But if you were vulnerable to confirmation bias, you might not even consider the first two possibilities—and gravitate immediately to the third. In other words, of course we’re right about the feature, and of course we built it in the right way; our users just haven’t found it yet.

You might even call a user and phrase your question this way: “If we moved [XYZ feature] to a more central place in the app, might you be more likely to use it?”

This type of bias can lead even a data-driven PM with the best of intentions to focus resources on the wrong things and ultimately undermine the product.

4. Inefficient data collection leads to decisions that come too late.

This common pitfall for the data-driven SaaS product manager can be caused either by technology or simply by the PM’s own processes and comfort level.

In terms of technology, SaaS PMs trying to compile data on their products no longer need to reinvent the wheel by asking their internal teams to build tools from scratch to track and analyze usage data and other important metrics. Plug-and-play SaaS analytics tools are now available and affordable for even the most cash-strapped startups.

But many organizations have not yet adopted these off-the-shelf tools, and as a result they often go through the inefficient process of either developing in-house analytics platforms to track their own products or using manual processes, such as building spreadsheets to compile and analyze key data points.

In terms of PMs’ own processes, many don’t feel “technical” enough to build or study large data sets, so they rely exclusively on more anecdotal research—such as calling a single user and having a long conversation about the product.

Of course, both of these types of pitfalls—using outdated data-analysis technology or taking a more manual approach to gathering user information—can prevent a PM from having immediate access to statistically significant data sets. Without this ready access to large amounts of relevant information, PMs can miss out on much of the reason they sought to become more data-driven in the first place—the ability to make informed product decisions and adjustments quickly.

5. A misinterpreted or over-weighted piece of data leads to an inaccurate conclusion.

Finally, a single piece of data can sometimes strongly suggest one conclusion on its own but, viewed in the larger context of more data or other factors, point to another conclusion entirely.

Imagine you were conducting a survey of potential buyers for your new enterprise SaaS app, and the results of that survey told you a significant percentage of respondents—say, 65%—planned to buy your product when it launched. Great news, right?

Maybe. But does that one data point reflect the entire story? Does it mean that 65% of all of your market—or even 65% of the people you surveyed—are going to buy your app? Consider these other data points:

  • What percentage of those respondents are end users, and what percentage are actually the buyers or decision-makers for their companies?
  • How close were you to launching your product when you sent out that survey? If you were months away, can you reliably assume those numbers will still be accurate on launch day?
  • What percentage of the people you surveyed actually responded? And what, if anything, does that number tell you about the people who chose not to answer?
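
To see how quickly that headline number can shrink, here’s a quick back-of-the-envelope sketch in Python. Every figure below except the 65% stated-intent number is invented purely for illustration:

```python
# Back-of-the-envelope check on a "65% said they'd buy" survey result.
# Every number except the 65% intent figure is hypothetical.

surveys_sent    = 500
response_rate   = 0.30   # only 30% of recipients answered at all
stated_intent   = 0.65   # 65% of *respondents* said they'd buy
decision_makers = 0.25   # share of respondents who actually control budget

respondents   = surveys_sent * response_rate                  # 150 people
likely_buyers = respondents * stated_intent * decision_makers # ~24 people

print(f"{likely_buyers:.0f} likely buyers out of {surveys_sent} surveyed "
      f"({likely_buyers / surveys_sent:.0%} of the original list, not 65%)")
```

None of this makes the survey useless; it just means the 65% answers a much narrower question than “will 65% of our market buy?”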

This problem, misinterpreting or over-weighting a single piece of data, could be the result of several of the other pitfalls we’ve discussed. If you’re already convinced your product is going to be a hit, for example, confirmation bias might lead you to put far more weight on that 65% figure than it deserves.

The point is, you always need to interpret each piece of data in its proper context—which, in some cases, means analyzing it against what seems like contradictory data.

Data-Driven SaaS Product Management Works Only if You Avoid the Pitfalls

You should be incorporating data at all phases of your product development process by gathering, analyzing, and acting on data related to your products and users every chance you get. SaaS product managers should be data driven. Product roadmaps should be data driven. And the decisions you make about your products should, wherever possible, be influenced by data.

Just make sure you’re focusing on the right types of data, interpreting it the right way, and using the most intelligent tools and processes to gather and analyze it. One great way to make sure you’re doing all of these things correctly is to keep this list of common pitfalls handy—and check in regularly to make sure you’re not falling into any of them. But most of all, remember to step back every now and then and see the big picture from a human perspective.

To learn more about how to build a data-driven roadmap, watch our webinar:

 
