I’ll keep ’em shorter from now on I think.
A probabilistic model for the establishment of neuron polarity
Kostya Khanin and Raya Khanin
J. Math. Biol. 42, 26-40 (2001)
This paper seeks to answer a puzzling question in neuron development: why does only one neurite (an outgrowth from the cell body) out of many on a neuron grow into an axon? Their answer lies in a particular kind of inhomogeneous Markov chain, or random walk, that has also been called a “balls-in-bins process with feedback.” This kind of model has been used to analyze competitive scenarios in other fields, as well as in preferential attachment models for networks. They model neurites as bins and neurite length as the number of balls in a bin.
The basic model is this: suppose we have k bins containing balls. Let a(t) be a length-k vector whose entries are the numbers of balls in the bins at time t, and let f(n) be a function that we will call the value of having n balls. At each time step, a ball is added to one of the bins at random. The probability that bin j is chosen is:
$$\mathbb{P}\bigl(\text{bin } j \text{ is chosen at time } t\bigr) = \frac{f(a_j(t))}{\sum_{i=1}^{k} f(a_i(t))}$$
That is, the probability is equal to the value of bin j over the sum of the values of all the bins.
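To make the update rule concrete, here is a minimal sketch of one step of the process in Python (the function name and interface are mine, not from the paper):

```python
import random

def feedback_step(counts, f, rng=random):
    """One step of the balls-in-bins process with feedback: bin j receives
    the next ball with probability f(counts[j]) / sum_i f(counts[i])."""
    weights = [f(c) for c in counts]
    j = rng.choices(range(len(counts)), weights=weights)[0]
    counts[j] += 1
    return j
```

Iterating this step from some initial vector of counts generates the whole process.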
In this paper they look at value functions of the form

$$f(n) = n^p, \qquad p > 0,$$

and show that there are three regimes of behavior:

- For p < 1 (weak feedback), all bins keep growing, and each bin’s share of the balls converges to 1/k.
- For p = 1, this is the classical Pólya urn: the shares converge, but to a random limit.
- For p > 1 (strong feedback), one bin wins a monopoly: after some random finite time, every subsequent ball lands in a single bin, and all the other bins stop growing.
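The difference between weak and strong feedback is easy to see in simulation. A quick sketch (mine, assuming a power-law value function f(n) = n^p):

```python
import random

def run(p, k=5, steps=20000, seed=1):
    """Simulate the balls-in-bins process with value function f(n) = n**p."""
    rng = random.Random(seed)
    counts = [1] * k          # start every bin with a single ball
    for _ in range(steps):
        weights = [c ** p for c in counts]
        j = rng.choices(range(k), weights=weights)[0]
        counts[j] += 1
    return sorted(counts, reverse=True)

print(run(p=2.0))   # strong feedback: one bin ends up with nearly all the balls
print(run(p=0.5))   # weak feedback: the five counts stay close together
```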
In order to prove these cool and weird results, they embed the arrival process of balls in each bin into a sum of exponential random variables whose rate changes over time according to how many balls are in the bin: if a bin has r balls, the rate of the next term in the sum is f(r). The bin whose exponential random variable is smallest gets the next ball, and the min operation on exponential random variables recovers exactly the probability ratio above: the probability that the smallest exponential random variable is the j-th one is f(a_j(t)) divided by the sum of the f(a_i(t)). Thus the distribution of the ball counts is the same in the discrete-time model and the continuous-time embedding. Once they have the embedding, it’s a bunch of other tricks to get the results in nice form.
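That equivalence is easy to check numerically. In this sketch (mine, not from the paper), each bin draws an independent Exp(f(a_j)) waiting time and the smallest draw wins the ball; the empirical win frequencies come out to f(a_j) over the sum of the f(a_i):

```python
import random

def exponential_race(counts, f, rng):
    """One step of the continuous-time embedding: bin j draws an
    Exp(f(counts[j])) waiting time, and the smallest waiting time wins."""
    waits = [rng.expovariate(f(c)) for c in counts]
    return waits.index(min(waits))

rng = random.Random(0)
counts = [1, 2, 3]
f = lambda n: n ** 2          # exponential rates 1, 4, 9
trials = 100_000
wins = [0, 0, 0]
for _ in range(trials):
    wins[exponential_race(counts, f, rng)] += 1

total = sum(f(c) for c in counts)
for j in range(3):
    print(j, wins[j] / trials, f(counts[j]) / total)   # empirical vs. f(a_j)/sum
```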
A more recent paper by Oliveira has a nice extension of these arguments to more general f(n), approximating the embedding with Brownian motion. As it turns out, the critical things to check are the summability and square summability of 1/f(n).
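To see why 1/f(n) is the relevant quantity: in the embedding, the time a single bin takes to go from one ball to infinitely many is a sum of independent Exp(f(r)) waiting times, with total mean the sum of 1/f(r). When that series converges (for example, f(n) = n^p with p > 1), a bin can acquire infinitely many balls in finite time, which is the mechanism behind the monopoly regime. A quick numerical look (my own sketch):

```python
def partial_sum(p, terms):
    """Partial sums of sum_n 1/n**p, the expected explosion time for f(n) = n**p."""
    return sum(1.0 / n ** p for n in range(1, terms + 1))

for terms in (10**2, 10**4, 10**6):
    print(terms, partial_sum(1.0, terms), partial_sum(2.0, terms))
# p = 1: the partial sums keep growing like log(n), so no explosion;
# p = 2: they level off near pi**2 / 6, so the clock explodes in finite time.
```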