“I also like this because it is the essence of science: choosing between theories (including no theory) based on predictions. The more unlikely the outcome, the more you learn. You’d never know this from 99.99% of scientific papers, which say nothing about how unlikely the actual outcome was a priori — at least, nothing numerical. I can’t say why this happens (why an incomplete inferential logic, centered on p values, remains standard), but it has the effect of making good work less distinguishable from poor work.”

I think Seth just answered his own question.
There’s a whole industry of academies, of scientists working at those academies, and of journals in which those scientists publish to advance at their academies, that depends on good work being less distinguishable from poor work.
A better standard might reveal that a lot of those scientists aren’t doing valuable work…
One gets the idea that science is broken, perhaps intentionally.