If I open a second instance of Cygwin and run another instance of this program, will Windows/Cygwin know to execute it on a separate core? Brian (the author) just sent me his timings (attached) to give me an idea of what to expect. #3 is the shortest, #50 the longest.
Yeah, I don't really know what the norms are surrounding this sorta thing. I really don't want to send these guys a huge fuck-you. But it's basically been accepted at a real journal...
This new one, I think? Does the fact that it's going to press change anything?
It's good to hear their approach seems fairly reasonable to you. It seemed weird to me to bring periodic functions like cosine into it... my intuition is that the density for unreciprocated ties would have perhaps three local maxima: one somewhere in the middle, and the two "corner solutions" at the maximum rank differences.
I have another friend who suggested eliminating
α and β as separate parameters, i.e., not assuming reciprocated ties and unreciprocated ties are different, but letting that emerge... so he suggested a mixture of three beta distributions.
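To make that concrete, I think he means something like this (the weights
and shape parameters below are made up, just for illustration):

import numpy as np
from scipy.stats import beta

# Mixture of three Beta distributions on [0, 1]; the weights and
# (a, b) shape parameters are placeholders, not a real fit.
weights = np.array([0.5, 0.25, 0.25])  # must sum to 1
params = [(5, 5), (8, 1), (1, 8)]      # (a, b) for each component

def mixture_pdf(x):
    # Density of the mixture: weighted sum of component densities.
    return sum(w * beta.pdf(x, a, b) for w, (a, b) in zip(weights, params))

x = np.linspace(0, 1, 200)
density = mixture_pdf(x)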
On Fri, Oct 5, 2012 at 9:31 AM, Eric Purdy
<epurdy@uchicago.edu> wrote:
I feel like, if people on this list have the energy, we could actually
reimplement simplified versions of their experiments in scipy. Would
that be rude or something? Certainly in traditional peer review, it is
a huge fuck-you to try to improve on the results in a preprint that
someone has shared with you.
> Eric, I don't want you to feel pressured to explain, but let me throw a few
> questions out there. Do you agree with the decision to use a Poisson
> formulation of the random network generation process?
I haven't looked that closely at the random network generation
process, but in general I am strongly in favor of generative models
like that. Bayesian methods typically try to maximize the "posterior
probability" of a model, as follows:
P(model | data) = P(data | model) P(model) / P(data)
If you only care about maximizing P(model | data), then you typically
take logs and throw away constants:
log P(model | data) = log P(data | model) + log P(model) - log P(data)
                    = log P(data | model) + log P(model) + C
where C = -log P(data) is a constant, since P(data) does not depend on
the model.
P(model) is called the prior probability of the given model. P(data |
model) is the likelihood: some proposed way that you would randomly
generate the observed data given the model and its parameters.
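A toy version in scipy might look like this (the Gaussian model and the
numbers are made up, just to show the shape of the computation):

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

data = np.array([1.2, 0.8, 1.5, 1.1, 0.9])  # made-up observations

def neg_log_posterior(mu):
    # log P(data | model): Gaussian likelihood with unknown mean mu.
    log_likelihood = norm.logpdf(data, loc=mu, scale=1.0).sum()
    # log P(model): standard normal prior on mu.
    log_prior = norm.logpdf(mu, loc=0.0, scale=1.0)
    # log P(data) is constant in mu, so drop it; minimize the negative.
    return -(log_likelihood + log_prior)

result = minimize_scalar(neg_log_posterior)
print(result.x)  # MAP estimate of mu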
> Also, is it surprising that:
>>
>> ...The function β takes a more complicated form which we parametrize as a
>> Fourier cosine series, keeping five terms and squaring to enforce
>> nonnegativity, plus an additional Gaussian peak at the origin
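If I'm reading that right, it's something like this (my guess at what
they mean, with made-up coefficients, not their actual code):

import numpy as np

# beta(x) = (five-term Fourier cosine series)^2 + Gaussian peak at 0.
c = np.array([0.5, 0.3, -0.2, 0.1, 0.05])  # made-up cosine coefficients
amp, sigma = 1.0, 0.1                      # made-up Gaussian peak params

def beta_param(x):
    series = sum(ck * np.cos(k * x) for k, ck in enumerate(c))
    return series**2 + amp * np.exp(-x**2 / (2 * sigma**2))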
I would probably just use polynomials? Then you could require alpha to
be composed of only even-powered terms (1, x^2, x^4, etc.) and beta to
be composed of only odd-powered terms (x, x^3, x^5, etc.). This would
force alpha to be symmetric (alpha(-x) = alpha(x)) and beta to be
anti-symmetric (beta(-x) = -beta(x)). But maybe there is some reason
why that wouldn't work.
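For instance (coefficients made up; in practice you would fit them):

import numpy as np

a = np.array([1.0, -0.5, 0.1])   # even powers: 1, x^2, x^4
b = np.array([0.8, -0.2, 0.05])  # odd powers:  x, x^3, x^5

def alpha(x):
    # Even powers only => alpha(-x) == alpha(x).
    return sum(c * x**(2 * i) for i, c in enumerate(a))

def beta_fn(x):
    # Odd powers only => beta(-x) == -beta(x).
    return sum(c * x**(2 * i + 1) for i, c in enumerate(b))

x = np.linspace(-1, 1, 5)
assert np.allclose(alpha(-x), alpha(x))
assert np.allclose(beta_fn(-x), -beta_fn(x))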
--
-Eric