Expecting the unexpected vs not expecting the expected
No-one expects the net present value in the eNPV.*
Many companies say this doesn’t matter, because:
‘we don’t believe it, but it is useful because the same errors go into everyone’s calculation’
‘we don’t think it is accurate, but it is directionally useful’
‘we don’t believe it, but it is a useful exercise for the team to go through’
‘we do produce an eNPV for each project, but we don’t really believe any of them…’
Many of the arguments in support of the eNPV recall the famous reply sent to Nobel laureate Kenneth Arrow who, as a young statistician during the Second World War, reported that his team's long-range weather forecasts were no better than chance:
“The Commanding General is well aware the forecasts are no good. However, he needs them for planning purposes.”
or Dwight Eisenhower’s speech in 1957:
“I tell this story to illustrate the truth of the statement I heard long ago in the Army: Plans are worthless, but planning is everything. There is a very great distinction because when you are planning for an emergency you must start with this one thing: the very definition of ‘emergency’ is that it is unexpected, therefore it is not going to happen the way you are planning.”
There are two main opportunities to upgrade the eNPV/rNPV.
The first is simple: there is no single eNPV, by definition. An average forecast across the hundred different ways a molecule could launch, and the paths it might take to get there, is not useful. Each of those hundred paths would have a different NPV: a different time to market, different costs, value and risks. It is better for you to have all one hundred than a single compromised number that represents none of them. It is better to lean into that complexity than to pretend it doesn't exist, especially once you accept that your competitors may well be running the same process, but with different assumptions.
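A minimal sketch of that first point, assuming a deliberately crude cash-flow model; every path name, cost and probability below is an invented illustration, not data:

```python
# Sketch: one NPV per development path, rather than one blended eNPV.
# Every name, cost, and probability here is an invented illustration.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    years_to_market: int
    dev_cost: float     # total development cost, $m, treated as upfront
    peak_sales: float   # flat annual sales once launched, $m
    p_success: float    # probability this path reaches market

def npv(path: Path, discount: float = 0.1, sales_years: int = 10) -> float:
    """Crude NPV of one fully specified path, with no probability weighting."""
    revenue = sum(
        path.peak_sales / (1 + discount) ** (path.years_to_market + t)
        for t in range(sales_years)
    )
    return revenue - path.dev_cost

paths = [
    Path("fast to market, narrow label", 6, 400, 300, 0.08),
    Path("slower, broad label", 9, 700, 900, 0.05),
    Path("biomarker-selected subgroup", 7, 500, 450, 0.12),
]

# Keep the whole set: each path is a different opportunity with its own
# value, timing and risk, not an input to a single blended average.
for p in paths:
    print(f"{p.name}: NPV ${npv(p):,.0f}m at p(success) = {p.p_success:.0%}")
```

Collapsing these into one probability-weighted number hides exactly the spread a portfolio manager needs to see.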
The second concerns risk. In early stage, your forecast of the probability of technical success, especially, should be extremely low. It makes no sense to import average attrition rates for the disease area, or any of the other surrogate numbers that are used. Path dependence makes the risk calculation itself risky - committing to a single linear path leads to an overestimate of probability, while ignoring the side paths that might have been better, or the option to recalculate at a decision point further ahead.
(Industry-average attrition rates have a related problem: the difference in phase I-to-market success between therapeutic areas like oncology and areas like CNS means oncology projects are prioritised by stealth, via tools like the eNPV.)
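A back-of-envelope sketch of that stealth prioritisation, using invented transition rates rather than any published benchmark:

```python
# Sketch: how industry-average phase transition rates smuggle
# therapeutic-area bias into the eNPV. The rates are illustrative
# placeholders, not published benchmarks.
from math import prod

transition_rates = {
    # phase I->II, phase II->III, phase III->filing, filing->approval
    "area A": [0.60, 0.35, 0.60, 0.90],
    "area B": [0.60, 0.25, 0.50, 0.90],
}

for area, rates in transition_rates.items():
    pos = prod(rates)  # cumulative probability along one linear path
    print(f"{area}: phase I-to-market PoS = {pos:.1%}")

# area A ~11.3%, area B ~6.8%: an identical molecule now carries ~68%
# more probability-weighted value in area A, before anyone has looked
# at the molecule itself.
```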
If, in early stage, your forecast of the probability of technical success is high, you're certainly not calculating it right - but you probably are gaming the portfolio management process. If one team unilaterally tells the whole truth, their programme will be killed, because every other team inflated their confidence: the tragedy of the commons at work. The development process gives no incentive to reveal real probabilities. So market size and share are inflated, as is the probability of getting there - a sophistry we all know is happening.
If, in reality, no-one believes the forecast for an early stage asset, or the probabilities of success (even McKinsey acknowledge that they're all wrong, and wrong in both directions), we might ask whether they are indeed useful exercises for teams to go through. Is the planning useful? It would be, if the goal were to produce a range rather than a single project value estimate. As soon as the team decide on a compromise TPP (target product profile) and the eNPV, it will be gamed to look positive - adherence rates bumped up by 5%, PTS by 8%, market share by 2%, and so on. If instead the team focused on good numbers for each of a wide range of paths, and were incentivised to produce accurate assumptions, the eNPVs would still be wrong (inevitably wrong) but they could be useful: the range would give portfolio managers choice, as well as risk mitigation strategies that recognise path dependence in early phase. The team would be having an interdependent series of conversations about opportunity, not seeking consensus on risk.
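To see how quickly those "small" adjustments compound, here is a toy calculation; the baseline values are invented, and the bumps are read as percentage-point increases:

```python
# Sketch: how three 'small' bumps compound. Baseline assumptions are
# invented; the bump sizes are the ones from the text, read as
# percentage-point increases.
baseline = {"adherence": 0.60, "pts": 0.10, "share": 0.15}
bumps = {"adherence": 0.05, "pts": 0.08, "share": 0.02}

def relative_value(assumptions: dict) -> float:
    """Toy model: project value scales with the product of the inputs."""
    value = 1.0
    for factor in assumptions.values():
        value *= factor
    return value

gamed = {k: baseline[k] + bumps[k] for k in baseline}
uplift = relative_value(gamed) / relative_value(baseline) - 1
print(f"eNPV inflated by {uplift:.0%}")  # roughly +121% in this toy case
```

Three adjustments, each defensible on its own, more than double the number in this toy model - which is why a consensus eNPV drifts positive.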
A learning company has to diagnose the parts of its system which stop the learning. The eNPV hides so many mistakes within its truthiness that it can only be dogmatism, masquerading as pragmatism. It may be hard to remove, but opportunity seeking can’t co-exist with the current process.
*I’ve used eNPV throughout, as ‘risk-adjusted NPV’ is an even worse name for a similar process - it ignores so many risks (practical and opportunity risks, for example) that it is under-adjusted by design.