As a positive distraction from my thesis writing, I've been thinking a bit about the statistical crisis in biomedical sciences and psychology research, and how it might be mitigated.

A number of critics of p-values and proponents of Bayesian inference have influenced my thinking on this issue. I have come to the conclusion that Bayesian estimation and inference should be more widely used, because interpretable uncertainty is essentially built into the inference philosophy.

I think one thing preventing wider adoption of Bayesian inference methods is their flexibility (read: complexity). How does one compose a model with little grounding in statistics?

To address this problem, I've started putting together Jupyter notebooks showing common problems in the experimental sciences, each paired with a sensible default model one can use for that kind of problem.
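To give a flavour of what I mean by a "sensible default", here is a minimal sketch of one such model: comparing a control and a treatment group with PyMC. The group names, prior choices, and simulated data are purely illustrative, not taken from any of the notebooks.

```python
import numpy as np
import pymc as pm

# Hypothetical measurements from a control and a treatment group.
rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=30)
treatment = rng.normal(loc=11.5, scale=2.0, size=30)

with pm.Model():
    # Weakly informative priors: one mean and one spread per group.
    mu = pm.Normal("mu", mu=10.0, sigma=10.0, shape=2)
    sigma = pm.HalfNormal("sigma", sigma=10.0, shape=2)

    # Likelihoods for the two groups.
    pm.Normal("control_obs", mu=mu[0], sigma=sigma[0], observed=control)
    pm.Normal("treatment_obs", mu=mu[1], sigma=sigma[1], observed=treatment)

    # The quantity we usually care about: the difference in group means.
    pm.Deterministic("effect", mu[1] - mu[0])

    idata = pm.sample()  # posterior samples give interpretable uncertainty on "effect"
```

The payoff is that the posterior on `effect` directly quantifies how large the difference is and how sure we are about it, rather than reducing the question to a single p-value.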

For me, a recurrent (and very interesting) theme came up. The nature of probabilistic graphical models is such that if we are able to forward-simulate how the data may be generated, then, given the data and a loss function, fitting the model to the data is merely a matter of optimization. The core idea behind these notebooks, then, is that a small number of "generic" models of how data may be generated can cover a large proportion of scenarios, particularly those where we don't have sufficiently good theory to forward-simulate the complex data-generating distribution underlying the data.
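To make the forward-simulation point concrete, here is a small sketch in plain NumPy/SciPy, with made-up "true" parameter values: we write down the generative story for the data, and fitting it reduces to minimizing a loss, in this case the negative log-likelihood, over the parameters.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Forward simulation: assume a simple linear data-generating process
# y = a * x + b + noise, with illustrative "true" parameter values.
true_a, true_b, true_sigma = 2.0, -1.0, 0.5
x = np.linspace(0, 1, 50)
y = true_a * x + true_b + rng.normal(scale=true_sigma, size=x.size)

def negative_log_likelihood(params):
    # Loss function: how implausible the observed y is under (a, b, sigma).
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma positive
    return -norm.logpdf(y, loc=a * x + b, scale=sigma).sum()

# Given the data and the loss, fitting is "merely a matter of optimization".
result = minimize(negative_log_likelihood, x0=np.array([0.0, 0.0, 0.0]))
a_hat, b_hat, sigma_hat = result.x[0], result.x[1], np.exp(result.x[2])
print(a_hat, b_hat, sigma_hat)
```

Swapping the optimizer for an MCMC sampler over the same generative story is what turns this into full Bayesian estimation; the generative model itself stays the same.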