Three steps to making confident decisions with data

This is an exciting time to be making business decisions. Today’s CEO typically has gigabytes more data available than they would have had 10 years ago. There has never been a better opportunity to make decisions informed by real-world evidence. The problem is, if you’re not nitpicky about how you process your data, your decision-making will be no better than pure intuition, and quite possibly worse. Making a confident decision requires an iterative approach.
 
In my experience, any business attempting to make the right decisions from their data will encounter three levels as they ascend from “no data” to smart decisions. Even a business like Bonanza, with nearly 10 years of experience using data, still goes through all three levels before we make an important decision. As best I can tell, you can’t skip a level.
 

Level one: gather any data

To explain what this level entails, let’s work from a real-world example. We’ll look at the time when I decided whether to launch webstores. The risk of webstores was that we would struggle to build a more useful experience than standalone webstore businesses like Shopify. But if we could beat them on ease of use, maybe there was a business opportunity there? Let data lead the way.
 
For a decision like this, we needed to be confident that webstores would at least recoup the $50k of programmer salary required to launch them. Here’s some data we considered gathering:
  1. Number of booths making at least 10 sales per month (good webstore candidates)
  2. Number of booths currently paying for on-site memberships (proxy for those willing to pay monthly fee)
  3. Number of sellers that indicate webstore interest via a poll
  4. Number of customers advertised by competitors (gauge market size)
  5. Number of booths that have logged in during last week (proxy for highly engaged stores)
None of these can directly predict what we want to know (“what is the sum of monthly revenue we can expect?”), but they seem like a good starting point to assess webstore participation. For managers who want data that isn’t already in the database, level one requires implementing the data gathering mechanism as well. As you will often find, “implementing the data gathering” is an undertaking steeped in peril.
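
To make those proxies concrete, here’s a rough sketch of the back-of-envelope math they feed into. Every number and name below is hypothetical, made up for illustration; the real figures lived in our database and polls.

```python
# A minimal sketch of the break-even question behind the proxy metrics above.
# All counts, prices, and rates here are hypothetical placeholders.

DEV_COST = 50_000          # programmer salary required to launch webstores
MONTHLY_PRICE = 25         # assumed webstore subscription price
PAYBACK_MONTHS = 12        # how quickly we'd want to recoup the cost

paying_members = 1_200     # booths already paying for on-site memberships
poll_interest_rate = 0.15  # share of polled sellers expressing webstore interest

# Crude guess: sellers who already pay us AND expressed interest would sign up
estimated_signups = paying_members * poll_interest_rate

projected_revenue = estimated_signups * MONTHLY_PRICE * PAYBACK_MONTHS
print(f"Projected {PAYBACK_MONTHS}-month revenue: ${projected_revenue:,.0f}")
print("Worth building" if projected_revenue >= DEV_COST else "Need better data")
```

Even this toy version makes the weakness obvious: the answer swings wildly depending on which proxy you trust, which is exactly what level two is about.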
 

Level two: gather the right data

I don’t know why this level is necessary. Someday, I will pick the right factor to measure, and I will implement that measurement correctly on the first try, thereby jumping directly from level one to level three. It will be amazing! The only problem is that it has never happened thus far, and probably never will.
 
Instead, what seems to inevitably occur is an adjustment period following level one. During this adjustment period, aka “level two,” I’ll suddenly realize that what we’re measuring is either misleading or outright wrong (from errors in how we implemented the measurement). The further the data strays from intuition (aka “the more useful it is”), the more likely there’s an error in collection. It turns out there is a near-infinite range of ways that your chosen measurement technique can mismeasure the thing you’re trying to understand.
 
Looking at the first example from level one (“Number of booths making at least 10 sales per month”), we wanted to believe that lots of sales corresponded to being a good webstore candidate. It turns out that isn’t true. Our most successful sellers are often the busiest sellers, connected to the most platforms. Level two research teaches us that these sellers are no more likely to sign up for a webstore than an average seller.
 
The bigger problem with all of the ideas from level one is that they offer zero assistance in translating webstore signups to revenue. For this, we need to test signups that would occur at a specific pricing threshold. To gather this data, we settled on using a “fake door,” a technique often advocated to me by Dan Shapiro in Bonanza’s early days. The idea is that you build a landing page with a specific sales pitch, and see how many people will purchase what you’re selling.
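
Here’s a sketch of roughly how fake door results translate into the revenue number we’re after. The visitor counts, price point, and seller totals below are invented placeholders rather than our actual results.

```python
# A minimal sketch of turning fake-door signups at a specific price point
# into a projected revenue figure. All numbers are hypothetical.

visitors = 2_500        # sellers who saw the fake-door landing page
signup_clicks = 180     # sellers who clicked "Sign up for $25/month"
monthly_price = 25      # the price pitched on the landing page

conversion_rate = signup_clicks / visitors   # 7.2% in this example
eligible_sellers = 20_000                    # assumed addressable sellers

projected_monthly_revenue = eligible_sellers * conversion_rate * monthly_price
print(f"Fake-door conversion: {conversion_rate:.1%}")
print(f"Projected monthly revenue: ${projected_monthly_revenue:,.0f}")
```

The virtue of the fake door is that the conversion rate is measured at a real price, so the only guess left is the size of the addressable audience.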
 
Once we built a model that accurately measured the percentage who would sign up for webstores, we had accurate & well-targeted data. We could finally move to level three.
 

Level three: visualize the data

In Bonanza’s olden days, our data would most often be viewed through a custom-built dashboard. This worked great for the first few dashboards. But eventually we ended up with more and more “meta-dashboards,” like this:
 
 
Except that is barely even half the options on one of the “meta-dashboards” we created. We were collecting reams of data, and lots of good data, but our ability to follow up on the data we gathered was severely lacking.
 
Enter admin dashboards:
 
 
“Admin dashboards” are a set of top-level categories of stats that we want to track. For instance, we want to track stats for our performance with “webstores,” so when that button is clicked, we get a page’s worth of data, visualized in graphs, like so:
 
This is a great start! Here, we have defined a reasonably small number of categories (about 10) for all our data gathering efforts. We can find the data we created. And once we find it, our data can be understood quickly thanks to the graphical representation. But there is still a key limitation to this visualization: the correct date range and graph type vary from stat to stat. Enter per-graph chart options:
 
Now this is data with benefits! For each chart in the admin dashboard, we can adjust the visualization’s date range, date increment, graph type, and sum/average setting. The superset of these options has allowed us to create more than 100 different charts, measuring anything from simple line graphs like the above to pie charts showing our referral sources over varying time intervals.
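
For readers who like to picture how this hangs together, here’s a rough sketch of the kind of structure involved. The class names, stat names, and option values are illustrative, not our actual schema.

```python
# A minimal sketch of top-level dashboard categories with per-chart options.
# Names and values are illustrative, not Bonanza's actual schema.
from dataclasses import dataclass, field

@dataclass
class ChartOptions:
    date_range_days: int = 90      # how far back to plot
    date_increment: str = "week"   # bucket size: "day", "week", or "month"
    graph_type: str = "line"       # "line", "bar", "pie", ...
    aggregate: str = "sum"         # "sum" or "average" per bucket

@dataclass
class Chart:
    stat_name: str                 # e.g. "webstore_signups"
    options: ChartOptions = field(default_factory=ChartOptions)

# Each top-level category holds the charts shown when its button is clicked
dashboard = {
    "webstores": [
        Chart("webstore_signups", ChartOptions(graph_type="bar")),
        Chart("webstore_revenue", ChartOptions(date_increment="month")),
    ],
    "referrals": [
        Chart("referral_sources", ChartOptions(graph_type="pie", date_range_days=30)),
    ],
}
```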
 

Takeaways

Translating data into accurate, useful insights doesn’t happen by luck or accident. It doesn’t happen for teams that get distracted after completing level one. It happens when a team is focused enough to collect data (level one), or rather, the right data (level two)… and then to follow up with the effort to make that data easy to find and digest (level three).

The foremost point I would highlight is how seldom a first-take effort at data gathering is sufficient. Unless you’ve confirmed your data against a second source, like Google Analytics, it’s probably being collected with at least some random noise in it. Seek a secondary measurement that agrees with your initial numbers, or a second set of programmer eyes to look at the implementation. Once your secondary source agrees with the numbers you’ve collected in your database, go forth to the land of visualization and make yourself some impeccable decisions.
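
In code terms, that sanity check amounts to something like the snippet below: line up your internally logged numbers against the secondary source and flag the days that diverge. All figures are invented for illustration.

```python
# A minimal sketch of cross-checking internal stats against a secondary source
# (e.g. numbers exported from Google Analytics). All figures are hypothetical.

internal_visits = {"day 1": 10_480, "day 2": 11_950, "day 3": 9_870}
secondary_visits = {"day 1": 10_300, "day 2": 13_400, "day 3": 9_910}

TOLERANCE = 0.05  # flag anything more than 5% apart

for day, ours in internal_visits.items():
    theirs = secondary_visits.get(day)
    if theirs is None:
        continue  # no secondary number to compare against
    drift = abs(ours - theirs) / theirs
    status = "OK" if drift <= TOLERANCE else "INVESTIGATE"
    print(f"{day}: internal={ours:,} secondary={theirs:,} drift={drift:.1%} -> {status}")
```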
