I feel like, if people on this list have the energy, we could actually reimplement simplified versions of their experiments in scipy. Would that be rude or something? Certainly in traditional peer review, it's a huge fuck-you to try to improve on someone's results when they've only given you a preprint.
Eric, I don't want you to feel pressured to explain, but let me throw a few questions out there. Do you agree with the decision to use a Poisson formulation of the random network generation process?
I haven't looked that closely at the random network generation process, but in general I am strongly in favor of generative models like that. Bayesian methods typically try to maximize the "posterior probability" of a model, as follows:
P(model | data) = P(data | model) P(model) / P(data)
If you only care about maximizing P(model | data), then you typically take logs and throw away constants:
log P(model | data) = log P(data | model) + log P(model) - log P(data)
                    = log P(data | model) + log P(model) + C

where C = -log P(data) is constant with respect to the model, so you can drop it when optimizing.
P(model) is called the prior probability of the given model. P(data | model) is the likelihood: some proposed process by which you would randomly generate the observed data given the model and its parameters.
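To make that concrete, here is a minimal sketch of the kind of thing I mean, in scipy. It is not Eric's model: the data, the Poisson likelihood, and the Gamma prior are all made up for illustration. It just shows "maximize log P(data | model) + log P(model)" as an optimization problem.

```python
import numpy as np
from scipy import optimize, stats

# Made-up count data, standing in for whatever the real observations are.
data = np.array([3, 7, 4, 6, 5, 8, 2, 5])

def neg_log_posterior(params):
    """-[log P(data | model) + log P(model)], dropping the constant log P(data)."""
    rate = np.exp(params[0])  # optimize in log-space so the rate stays positive
    log_likelihood = stats.poisson.logpmf(data, rate).sum()
    log_prior = stats.gamma.logpdf(rate, a=2.0, scale=2.0)  # assumed Gamma(2, 2) prior
    return -(log_likelihood + log_prior)

result = optimize.minimize(neg_log_posterior, x0=[np.log(data.mean())])
print("MAP estimate of the Poisson rate:", np.exp(result.x[0]))
```

Minimizing the negative log posterior gives you the MAP (maximum a posteriori) estimate; swap in whatever likelihood the network generation process actually uses.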
Also, is it surprising that:
...The function β takes a more complicated form which we parametrize as a Fourier cosine series, keeping five terms and squaring to enforce nonnegativity, plus an additional Gaussian peak at the origin
I would probably just use polynomials? Then you could require alpha to be composed of only even-powered terms (1, x^2, x^4, etc.) and beta to be composed of only odd-powered terms (x, x^3, x^5, etc.). This would force alpha to be symmetric (alpha(-x) = alpha(x)) and beta to be anti-symmetric (beta(-x) = -beta(x)). But maybe there is some reason why that wouldn't work.
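Here is a rough sketch of the parametrization I have in mind. The coefficients are made-up numbers, and alpha/beta are just placeholders for whatever functions actually get fit; the point is only that restricting to even or odd powers bakes the symmetry in by construction.

```python
import numpy as np

def alpha(x, coeffs):
    """Even powers only, so alpha(-x) == alpha(x) by construction."""
    return sum(c * x ** (2 * k) for k, c in enumerate(coeffs))

def beta(x, coeffs):
    """Odd powers only, so beta(-x) == -beta(x) by construction."""
    return sum(c * x ** (2 * k + 1) for k, c in enumerate(coeffs))

x = np.linspace(-1.0, 1.0, 5)
a_coeffs = [1.0, -0.5, 0.2]   # made up: 1 - 0.5 x^2 + 0.2 x^4
b_coeffs = [0.3, 0.1]         # made up: 0.3 x + 0.1 x^3

assert np.allclose(alpha(x, a_coeffs), alpha(-x, a_coeffs))
assert np.allclose(beta(x, b_coeffs), -beta(-x, b_coeffs))
```

You could then hand these straight to scipy.optimize.curve_fit (or fold them into the log-posterior above) with the coefficients as the free parameters.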