Ben Hanowell writes:
I’ve worked for tech companies for four years now. Most have a key performance indicator that seeks to measure the rate at which an event occurs. In the simplest case, think of the event as a one-off deal, say an attempt by a buy-side real estate agent to close a deal on a house for their client. Suppose we know the date each deal began, that each deal may be closed and won, closed and lost, or still open (therefore not closed), and that we know the dates those events occurred.
In most companies, this rate is measured by counting the number of deals that get closed within D days of the day the deal began (e.g., the day the client first reached out to the agent). Usually closed-and-won is the outcome of interest, but closed-and-lost rates are also calculated. The value of D varies, and most businesses look at the number for several values of D, e.g., 30, 60, 90, 180, 360. This metric is easy to interpret, but it has drawbacks. For one, the D-day close rate can only be calculated meaningfully from deals that began at least D days ago. For another, all deals that began at least D days ago and within the period of interest are counted equally in the denominator, even though some of those deals spent far more person-time units than others at risk of getting closed.
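A minimal sketch of this metric, using made-up deal records and a hypothetical `d_day_close_rate` helper (the record layout is an assumption, not taken from any particular system):

```python
from datetime import date

# Hypothetical deal records: (open_date, close_date or None, outcome)
deals = [
    (date(2023, 1, 10), date(2023, 2, 1), "won"),
    (date(2023, 1, 15), None, "open"),
    (date(2023, 2, 1), date(2023, 5, 20), "lost"),
    (date(2023, 3, 5), date(2023, 3, 30), "won"),
]

def d_day_close_rate(deals, D, as_of, outcome="won"):
    """Share of deals at least D days old as of `as_of` that reached
    `outcome` within D days of opening."""
    eligible = [d for d in deals if (as_of - d[0]).days >= D]
    if not eligible:
        return None  # metric is undefined: no deal is old enough yet
    hits = [d for d in eligible
            if d[1] is not None and d[2] == outcome
            and (d[1] - d[0]).days <= D]
    return len(hits) / len(eligible)

print(d_day_close_rate(deals, 30, as_of=date(2023, 6, 30)))
```

Note that for `as_of` dates less than D days after the newest deal opened, recent deals silently drop out of the denominator, which is the first drawback mentioned above.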
Another metric companies often use is the time between the deal’s open and close dates, usually reported separately for closed-and-won versus closed-and-lost deals. This metric is problematic because it leaves out all the deals that aren’t yet closed, so it will usually underestimate the expected lifetime of a deal. I usually strongly advocate against its use.
For these reasons, when I come to a team, I try to reach back into my formal demographic training, where I learned that the proper definition of a demographic rate is the number of events that occur in a population of interest divided by the total number of person-time units lived by that population. From such rates, we can infer the expected waiting time, either as the inverse (assuming the rate is constant) or using some life-table method. Or we could get fancier and estimate the survivor function and hazard rate with some kind of event history analysis model. All of these methods require me to know the start date of observation and (if we’re trying to measure the closed-and-won rate) the end date, which is either the closed-and-won date, the closed-and-lost date, or the current date for those deals that remain open, which are effectively lost to follow-up.
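The occurrence/exposure rate described above can be sketched as follows, with made-up records and the assumption that still-open deals are censored at the analysis date:

```python
from datetime import date

# Hypothetical records: (open_date, end_date, won), where end_date is the
# close date, or the analysis date for deals still open (lost to follow-up)
today = date(2023, 6, 30)
records = [
    (date(2023, 1, 10), date(2023, 2, 1), True),   # closed and won
    (date(2023, 1, 15), today, False),             # still open: censored
    (date(2023, 2, 1), date(2023, 5, 20), False),  # closed and lost
    (date(2023, 3, 5), date(2023, 3, 30), True),   # closed and won
]

events = sum(1 for _, _, won in records if won)
person_days = sum((end - start).days for start, end, _ in records)

rate = events / person_days   # occurrence/exposure rate, per deal-day
expected_wait = 1 / rate      # mean waiting time if the rate is constant
print(rate, expected_wait)
```

Dividing wins by person-days keeps still-open deals in the calculation as censored observations rather than dropping them, which is exactly what the open-to-close-time metric fails to do.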
But an issue usually crops up that reminds me why companies stick with the supposedly inferior D-day close rate metrics. It has to do with the process by which deals get closed and lost. What happens is that, over time, many deals remain open when they should probably have been set to closed and lost. Meanwhile, deals opened more recently have not yet had time to reach such extreme ages. As a result, these ancient deals contribute many person-time units to the denominator of the rate for earlier cohorts, making the rate look like it is increasing over time.
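The distortion described here can be demonstrated with a small simulation, assuming constant (and identical) win and loss hazards in every cohort, where lost deals are never marked closed and keep accruing open time; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

WIN_HAZ, LOSS_HAZ = 0.01, 0.02   # per-day hazards, identical for every cohort
N = 10_000                        # deals per cohort

def cohort_rate(age_at_analysis):
    """Closed-and-won occurrence/exposure rate for one cohort, where deals
    that 'lost' were never marked closed and keep accruing open time."""
    t = rng.exponential(1 / (WIN_HAZ + LOSS_HAZ), N)   # time to resolution
    won = rng.random(N) < WIN_HAZ / (WIN_HAZ + LOSS_HAZ)
    resolved = t <= age_at_analysis
    wins = np.sum(resolved & won)
    # winners stop accruing exposure at t; 'lost' deals stay open to analysis day
    exposure = np.where(resolved & won, t, age_at_analysis)
    return wins / exposure.sum()

rates = [cohort_rate(a) for a in (1000, 500, 100)]  # old, middle, young cohorts
print(rates)
```

Even though every cohort has the same true hazards, the younger cohorts show higher measured rates, because the older cohorts carry stale never-closed deals in their denominators.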
I know there is something I could do that is better than D-day close rates. I know the solution probably has something to do with recognizing that the closed-and-lost examples shouldn’t be treated as just lost to follow-up, since their censoring is informative. I also know that the method I use to infer these rates needs to account for the fact that closed-and-lost deals should stop contributing person-time units to the denominator once they are lost, because from that point they are no longer at risk of being closed and won. I also know that one way to deal with this problem is to construct period-based rates rather than cohort-based rates.
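One way to sketch the period-based rate mentioned above: accumulate wins and person-time only within a calendar window, with each deal’s exposure ending at its win date, its loss date, or the analysis date, whichever comes first. The `period_won_rate` helper and the records are hypothetical:

```python
from datetime import date

today = date(2023, 6, 30)
# Hypothetical records: (open_date, end_date, status); end_date is the win
# date, the loss date, or today's date for deals that are still open
records = [
    (date(2023, 1, 10), date(2023, 2, 1), "won"),
    (date(2023, 1, 15), today, "open"),
    (date(2023, 2, 1), date(2023, 5, 20), "lost"),
    (date(2023, 3, 5), date(2023, 3, 30), "won"),
]

def period_won_rate(records, start, end):
    """Closed-and-won rate from wins and person-days accrued inside the
    calendar window [start, end], whatever cohort each deal belongs to."""
    wins, exposure = 0, 0
    for open_d, end_d, status in records:
        lo, hi = max(open_d, start), min(end_d, end)
        if hi > lo:
            exposure += (hi - lo).days  # person-days at risk inside the window
        if status == "won" and start <= end_d <= end:
            wins += 1
    return wins / exposure if exposure else None

q1_rate = period_won_rate(records, date(2023, 1, 1), date(2023, 3, 31))
print(q1_rate)
```

Because exposure stops at the loss date, closed-and-lost deals contribute person-time only while they are genuinely at risk of being won, and because the window is a calendar period, stale deals from ancient cohorts can’t pile unlimited exposure into any one rate.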
I assume I can blog your question and my reply. Or I could respond privately, but then I’d charge my consulting fee!
The reason I contacted you about this is so that you could reply to it in the public sphere. It’s a big issue in corporate analytics that isn’t handled well by a lot of teams. If your reply is just, “I don’t have time to think much about this, what say you, followers?” I guess that’s okay, too.
So then I read the full message above. I have not tried to follow all the details, but it seems like a pretty standard survival analysis problem. So I’ll just give my generic advice, which is to model all the data: set up a generative (probabilistic) model for what happens when, for each deal. In my experience, that works better than trying to model summary statistics. Make whatever graphs you want, but forget the D-day close rate and just go to the raw data. Then once you’ve fit your model, you can simulate replicated data and do posterior predictive checks on summaries of interest, such as those D-day close rates.
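A rough sketch of that workflow, with constant cause-specific hazards and a maximum-likelihood plug-in standing in for a full posterior (a real analysis would fit a Bayesian model and draw parameters from the posterior); all data are simulated and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 'observed' data under constant competing hazards (assumed values)
TRUE_WIN, TRUE_LOSS = 0.02, 0.01
N, FOLLOW_UP = 500, 90.0
t = rng.exponential(1 / (TRUE_WIN + TRUE_LOSS), N)
won = rng.random(N) < TRUE_WIN / (TRUE_WIN + TRUE_LOSS)
obs_t = np.minimum(t, FOLLOW_UP)      # censor at end of follow-up
obs_won = won & (t <= FOLLOW_UP)
obs_lost = ~won & (t <= FOLLOW_UP)

# MLE for constant cause-specific hazards: events / total exposure
exposure = obs_t.sum()
haz_win = obs_won.sum() / exposure
haz_loss = obs_lost.sum() / exposure

def sim_close_rate(d=30, n=N):
    """One replicated dataset's d-day closed-and-won rate under the model."""
    tt = rng.exponential(1 / (haz_win + haz_loss), n)
    ww = rng.random(n) < haz_win / (haz_win + haz_loss)
    return np.mean(ww & (tt <= d))

# Predictive check: where does the observed 30-day close rate fall among
# the replicated ones?
observed_rate = np.mean(obs_won & (obs_t <= 30))
replicated = np.array([sim_close_rate() for _ in range(1000)])
lo_q, hi_q = np.quantile(replicated, [0.025, 0.975])
print(observed_rate, lo_q, hi_q)
```

The point of the check is the comparison at the end: if the observed D-day close rate sits far outside the replicated distribution, the generative model is missing something (for instance, the stale-open-deal process described in the question).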