A very odd method where logistic regression would have sufficed. I would have done something like

gam(y ~ s(x), data, family = binomial)

to do a wiggly logistic regression, and avoid the binning issue entirely.
You can do it in a Bayesian way if you like, but I don't see why we should discretize the data into buckets.
And what am I supposed to take away from the normalized histogram?
Calling this Bayesian seems a bit like wishful thinking at the moment; you could just as easily have called it frequentist, since the main mechanism is merging adjacent bins based on a p-value.
A truly Bayesian approach would require specifying a likelihood function for the data based on the choice of bins and turning this into a posterior distribution on the choice (and number) of bins.
Calculating the maximum likelihood estimate for the simplest such likelihood function (samples within a bin are uniformly distributed, and the number of bins is geometrically distributed) can be done with a vaguely similar algorithm, but simply merging adjacent bins greedily is almost certainly biasing the result.
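To make that model concrete, here is a toy sketch of scoring one candidate binning under it: a piecewise-uniform likelihood for data on [0, 1] plus a geometric prior on the number of bins. The function name and the prior parameter q are my own choices, not anything from the post.

```python
import math

def log_posterior(counts, edges, N, q=0.5):
    """Unnormalized log posterior of a binning of N points on [0, 1].

    counts[i] points fall in [edges[i], edges[i+1]); the density within
    each bin is uniform at (counts[i]/N) / bin_width, and the number of
    bins K gets a Geometric(q) prior. Constant terms are dropped, so this
    is only useful for comparing binnings of the same data.
    """
    K = len(counts)
    ll = 0.0
    for m, a, b in zip(counts, edges, edges[1:]):
        if m > 0:
            ll += m * math.log((m / N) / (b - a))
    log_prior = (K - 1) * math.log(1 - q) + math.log(q)
    return ll + log_prior

# Clumped data (90 of 100 points in the left half) favors two bins,
# while evenly spread data favors one, since the geometric prior
# penalizes each extra bin.
two_bins = log_posterior([90, 10], [0.0, 0.5, 1.0], 100)
one_bin = log_posterior([100], [0.0, 1.0], 100)
```

A full Bayesian treatment would sum or sample over binnings rather than just comparing scores, but this is the objective a greedy merge would be an approximation to.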
I basically determine p via Bayesian inference within every bin (via a conjugate Beta prior on p, which yields a Beta posterior). If that's not Bayesian, then I don't know what is :)
Yes, the pruning can be done with a frequentist method too. Yes, you can come up with smarter / more statistically sound ways to construct these binnings. But do they work on >1e9 data points?
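For context, the p-value-based merge criterion the thread is debating can be sketched as a two-proportion z-test on adjacent bins; this is my reading of the mechanism, not the author's actual code, and the names are mine. Each pairwise check needs only the two bins' counts, which is what keeps a greedy pass cheap at scale.

```python
import math

def merge_pvalue(k1, n1, k2, n2):
    """Two-sided two-proportion z-test p-value for two adjacent bins.

    Bin i has k_i successes out of n_i trials. A large p-value means the
    bins' success rates are statistically indistinguishable, making them
    candidates for merging.
    """
    p_pool = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (k1 / n1 - k2 / n2) / se
    # two-sided tail probability under the standard normal:
    # 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# Similar bins (50/100 vs 52/100) -> large p-value, merge.
# Very different bins (20/100 vs 80/100) -> tiny p-value, keep separate.
```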