Two out of three...?
Ain't bad. Ain't good...
One piece of evidence may be better than none, but it can be more confusing.
Shorn of any context, this snapshot of a few hours of my heart rate raises more questions than it provides insight. It may be true, and useful, but only in the context of other vantage points.
As you have no idea what I was doing around 12 noon, you can generate guesses. Maybe, if you know me, you can generate more informed guesses. It is 'data', part of the vast data lakes and mountains so many talk about with such enthusiasm. But even with the best, most informed guesses in the world, good luck turning it into anything useful unless another source of 'data' tells you what I was doing. So: data ≠ information, at least in terms of utility. That does not stop fans of AI/ML from suggesting that pointing their computers at it could find something useful.
Now: tell me whether there is any value in averaging those data. Is my 'average' heart rate during the day worth anything? A mean of my income and Elon Musk's would tell you far more about his than mine. Here, knowing the lowest rate might be useful, as might the highest, if you also know what was driving it. The question is: is it reducible? Would my daily average heart rate tell you anything, even if you had a year of days? My Apple Watch may think it can take a good guess at a resting heart rate, because its rather clever accelerometer also knows when I am not moving. But my rather clever Apple Watch has no idea whether I am riding a stationary bike, watching Squid Game or listening to Newcastle United hanging onto a one-goal lead. It, and pretty much every other app out there, still needs me to tell it what I am doing when I start tracking.
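The averaging problem can be sketched in a few lines. These numbers are made up for illustration (the heart rates, the incomes, all of it), but they show how a mean flattens exactly the features that carry the meaning:

```python
# Illustrative, made-up figures: a heart rate series with one spike,
# and two wildly unequal incomes. The mean hides both stories.
heart_rate = [62, 60, 61, 63, 150, 64, 62]  # bpm; one spike around noon
mean_hr = sum(heart_rate) / len(heart_rate)

incomes = [50_000, 10_000_000_000]  # mine vs. a billionaire's (hypothetical)
mean_income = sum(incomes) / len(incomes)

print(round(mean_hr, 1))   # ~74.6: the spike has been smoothed away
print(int(mean_income))    # 5,000,025,000: the mean sits almost entirely at the top end
```

The spike (the interesting bit) vanishes into the average, and the 'mean income' describes neither party. The minimum, the maximum, and what was driving them are where the information lives.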
The ability to learn something from data has to come from a better question set, a better context. Anyone who 'saw' me on Zwift at this time already has an advantage over a room full of people with hypotheses, but no Zwift.
In a pharma company, who is the question setter, and who is the data gatherer? In many cases, they are the same person. With incentives in most companies driving risk mitigation rather than opportunity-seeking behaviour, someone ends up with the alligators.
The behaviour is endemic. If you calculate your 'success rate in phase I' and aggregate it with your later-phase percentages, you can make your number go up easily. One company (a top five) rather strangely suggested it was the most productive in pharma because (and I'm not kidding) it multiplied its success rates at phase I, phase II and phase III together and claimed the resulting number showed it was twice as good as its competitors. It is a meaningless claim, made laughable by the first-grade arithmetic, but it was put out by their corporate communications as a vindication of their R&D approach. It is really easy to 'succeed' in phase I. But that is not what phase I should be for... Getting 100% 'success' in phase I and 10% success in phase III would not feel the same as the reverse, even though the overall 'success rate' calculates out the same. The problem is one of definition: 'success' in an exploratory phase typically means lowering the bar on learning.
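The flaw in the multiplied metric is easy to demonstrate. These phase percentages are hypothetical, not the company's actual numbers, but multiplication is commutative, so two very different portfolios produce the identical headline figure:

```python
# Hypothetical portfolios. Multiplying per-phase success rates cannot
# distinguish "wave everything through early" from "kill early, learn fast".
def overall_rate(p1, p2, p3):
    """The multiplied 'overall success rate' metric described above."""
    return p1 * p2 * p3

easy_phase1 = overall_rate(1.00, 0.50, 0.10)  # 100% 'success' in phase I, attrition late
hard_phase1 = overall_rate(0.10, 0.50, 1.00)  # brutal phase I, survivors all succeed

print(easy_phase1 == hard_phase1)  # True: the metric is blind to where attrition happens
```

Both portfolios score 0.05, yet the first spends phase II and III money discovering what the second learned in phase I. A single multiplied number rewards moving failure later, which is precisely the expensive outcome.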
A balanced question, with a collective, aligned desire to seek opportunity, can keep the mindset positive, opportunistic and collaborative. As soon as 'not failing in phase I or II' becomes the question, the chance of developing meaningful medicines dips below the water...



