Coagulation and Coalescent Processes

If you haven’t read my previous post, I would suggest doing so before this one. We denote by \mathcal{P}_\infty the partitions of \mathbb{N}. The thing to keep in mind here is that we want to think of a coalescent process as a history of lineages. Suppose we start with the trivial partition (\{1\},\{2\},\dots) and think of each block \{i\} as a member of some population. A coalescent process \Pi=(\Pi(t) :t \geq 0) on \mathcal{P}_\infty essentially defines ancestries, in the sense that if i and j belong to the same block of \Pi(t) for some t\geq 0, then we think of that block as a common ancestor of i and j.

With this in mind, define the operator Coag:\mathcal{P}_\infty \times \mathcal{P}_\infty \rightarrow \mathcal{P}_\infty by

Coag(\pi,\pi')_i=\bigcup_{j \in \pi'_i}\pi_j.

The same operator can be defined, with the obvious modifications, on \mathcal{P}_{[n]}, the partitions of [n]:=\{1,\dots,n\}. So for example if \pi=(\{1,3,5\},\{2\},\{4\}) and \pi'=(\{1,3\},\{2\}), then Coag(\pi,\pi')=(\{1,3,4,5\},\{2\}). The partition \pi' tells us in this case to merge the first and third blocks and leave the second block alone.
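
To make the bookkeeping concrete, here is a minimal Python sketch (mine, not from any library; representing a partition as a list of blocks ordered by least element is just one convenient choice) implementing Coag and reproducing the example above.

```python
def canonical(blocks):
    """Sort the non-empty blocks by their least element (the ordering convention used here)."""
    return [sorted(b) for b in sorted((b for b in blocks if b), key=min)]

def coag(pi, pi_prime):
    """Coag(pi, pi')_i = union of pi_j over j in pi'_i; the blocks of pi' index the blocks of pi."""
    merged = []
    for idx_block in pi_prime:
        new_block = set()
        for j in idx_block:
            if j <= len(pi):            # indices beyond #pi point at empty blocks
                new_block.update(pi[j - 1])
        merged.append(new_block)
    return canonical(merged)

# The example from the text:
pi       = [{1, 3, 5}, {2}, {4}]
pi_prime = [{1, 3}, {2}]
print(coag(pi, pi_prime))   # [[1, 3, 4, 5], [2]]
```

The ordering-by-least-element convention only affects how blocks are labelled; as equivalence relations the outputs do not depend on it.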

This turns out to be the right tool for describing ancestral lineages, since families should only ever merge together. Let \pi|_{[n]} be the restriction of \pi to [n] in the obvious way, i.e. if \pi'=\pi|_{[n]}, then \pi'_i=\pi_i \cap [n], and let \mathbf{0}_\infty:=(\{1\},\{2\},\dots). Then we have the following nice proposition, which follows from the observation that Coag(\pi,\pi')|_{[n]}=Coag(\pi|_{[n]},\pi')=Coag(\pi,\pi'|_{[n]}).

Proposition:

The space \mathcal{P}_\infty with the binary operation Coag defined on it is a monoid with the identity \mathbf{0}_\infty.
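
As a quick sanity check of the restriction map, the compatibility observation above, and the identity property, here is a small self-contained sketch (the helpers are restated so it runs on its own; the example partitions are arbitrary choices of mine).

```python
def canonical(blocks):
    return [sorted(b) for b in sorted((set(b) for b in blocks if b), key=min)]

def coag(pi, pi_prime):
    merged = []
    for idx in pi_prime:
        merged.append(set().union(*[set(pi[j - 1]) for j in idx if j <= len(pi)]))
    return canonical(merged)

def restrict(pi, n):
    """pi|_[n]: intersect every block with {1, ..., n} and drop the empty ones."""
    return canonical([set(b) & set(range(1, n + 1)) for b in pi])

pi       = [{1, 3, 5}, {2, 6}, {4, 7}]
pi_prime = [{1, 3}, {2}]
n = 4
print(restrict(coag(pi, pi_prime), n) == coag(restrict(pi, n), pi_prime))   # True
zero = [{i} for i in range(1, len(pi) + 1)]    # the trivial partition on the block indices
print(coag(pi, zero) == canonical(pi))         # True: 0 acts as the identity
```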

With this in mind, a natural question to ask is whether the operator Coag preserves exchangeability. Recall that a random partition \pi is exchangeable if for each permutation \sigma of \mathbb{N} that fixes all but finitely many points, \pi has the same distribution as \sigma \pi, defined by i \buildrel \sigma \pi \over \sim j if and only if \sigma(i)\buildrel \pi \over \sim \sigma(j).
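
As a tiny illustration of this action (a sketch of my own, not part of the post), one can also describe \sigma\pi by saying that each block B of \pi is sent to \sigma^{-1}(B):

```python
def act(sigma, pi):
    """sigma is a dict {i: sigma(i)} on [n]; pi is a list of blocks of [n]."""
    inv = {v: k for k, v in sigma.items()}                # sigma^{-1}
    blocks = [{inv[x] for x in block} for block in pi]    # each block B of pi becomes sigma^{-1}(B)
    return [sorted(b) for b in sorted(blocks, key=min)]   # reorder by least element

sigma = {1: 2, 2: 3, 3: 1, 4: 4}    # a permutation of [4] fixing 4
pi    = [{1, 3}, {2, 4}]
print(act(sigma, pi))   # [[1, 4], [2, 3]]: e.g. 1 ~ 4 here since sigma(1)=2 ~ 4=sigma(4) in pi
```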

Proposition:

Let \pi,\pi' be two independent exchangeable random partitions. Then Coag(\pi,\pi') is exchangeable.

Proof:

Let \sigma be a permutation and \pi'':=Coag(\pi,\pi'). Notice that i \buildrel \sigma \pi'' \over \sim j if and only if \sigma(i) \buildrel \pi'' \over \sim \sigma(j), which happens if and only if \sigma(i) \buildrel \pi \over \sim \sigma(j) or k_i \buildrel \pi' \over \sim k_j, where \sigma(i) \in \pi_{k_i} and \sigma(j) \in \pi_{k_j}. Consider the cases one by one. First, the event \sigma(i) \buildrel \pi \over \sim \sigma(j) has the same probability as the event i \buildrel \pi \over \sim j by exchangeability of \pi. Second, if \sigma(i) \in \pi_{k_i}, then i \in \tilde \pi_{k_i} where \tilde \pi=\sigma ^{-1}\pi. Let \sigma' be the random permutation such that (\sigma\pi)_l=\sigma^{-1}(\pi_{\sigma'(l)}), which is independent of \pi'; then k_i \buildrel \pi' \over \sim k_j if and only if i \buildrel \sigma' \pi' \over \sim j, which has the same probability as i \buildrel \pi' \over \sim j by independence and exchangeability.

Thus putting this all together we have that the event \sigma(i) \buildrel \pi'' \over \sim \sigma(j) has the same probability as i \buildrel \pi'' \over \sim j, which proves exchangeability.

\square
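
For the sceptical reader, here is a hedged Monte Carlo spot-check of the proposition (the paintbox frequencies, the value n=6 and the trial count are all arbitrary choices of mine): it samples two independent paintbox partitions of [6], coagulates them, and compares the law of the partition induced on \{1,2,3\} with the law induced on \{4,5,6\}. Under exchangeability the two empirical distributions should agree up to simulation noise.

```python
import random
from collections import Counter

def canonical(blocks):
    return tuple(tuple(sorted(b)) for b in sorted((b for b in blocks if b), key=min))

def coag(pi, pi_prime):
    merged = []
    for idx in pi_prime:
        merged.append(set().union(*[set(pi[j - 1]) for j in idx if j <= len(pi)]))
    return canonical(merged)

def paintbox(n, freqs):
    """Exchangeable partition of [n]: each point independently picks a 'colour'."""
    colours = {}
    for i in range(1, n + 1):
        c = random.choices(range(len(freqs)), weights=freqs)[0]
        colours.setdefault(c, set()).add(i)
    return canonical(colours.values())

def induced(pi, points):
    """The partition induced on `points`, relabelled as a partition of [len(points)]."""
    pos = {p: k + 1 for k, p in enumerate(points)}
    return canonical({pos[x] for x in b if x in pos} for b in pi)

n, trials = 6, 20000
counts_123, counts_456 = Counter(), Counter()
for _ in range(trials):
    pi  = paintbox(n, [0.5, 0.3, 0.2])
    pi2 = paintbox(n, [0.4, 0.4, 0.2])
    result = coag(pi, pi2)
    counts_123[induced(result, (1, 2, 3))] += 1
    counts_456[induced(result, (4, 5, 6))] += 1

for key in sorted(set(counts_123) | set(counts_456)):
    print(key, counts_123[key] / trials, counts_456[key] / trials)
```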

Let us now endow this space with a metric, in preparation for introducing some Feller processes:

d(\pi,\pi')=1/\sup\{n \in \mathbb{N}:\pi|_{[n]}=\pi'|_{[n]}\}.

The choice of this metric is due to the paintbox correspondence of exchangeable partitions (as described here). It is then an easy task to verify that Coag:\mathcal{P}_\infty \times \mathcal{P}_\infty \rightarrow \mathcal{P}_\infty is Lipschitz.
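
Here is a small sketch of this metric in terms of the restriction maps (again my own illustration; the truncation level n_max is just a computational cut-off standing in for the supremum over all n).

```python
def canonical(blocks):
    return [sorted(b) for b in sorted((b for b in blocks if b), key=min)]

def restrict(pi, n):
    return canonical([set(b) & set(range(1, n + 1)) for b in pi])

def d(pi, pi_prime, n_max=1000):
    """1 / sup{n : pi|_[n] == pi'|_[n]}, probed up to the finite truncation n_max."""
    agree = 0
    for n in range(1, n_max + 1):
        if restrict(pi, n) != restrict(pi_prime, n):
            break
        agree = n
    return 0.0 if agree == n_max else 1.0 / max(agree, 1)

pi       = [{1, 2, 5}, {3}, {4}]
pi_prime = [{1, 2}, {3}, {4, 5}]
print(d(pi, pi_prime))   # the restrictions agree up to [4] and differ at [5], so d = 1/4
```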

We now arrive at a natural definition of a homogeneous exchangeable coalescent. This will be nothing but a Lévy process on the monoid we have constructed.

Definition:

A process \Pi=(\Pi(t):t \geq 0) is called an exchangeable coalescent if for each s,t \geq 0 the distribution of \Pi(t+s), conditioned on \Pi(s)=\pi, is that of Coag(\pi,\pi'), where \pi' is an exchangeable random partition, independent of the past, whose law depends only on t.

The fact that these processes carry the Feller property is immediate from the properties of Coag, and thus we obtain the strong Markov property.

A famous example of such a process is the so-called Kingman's coalescent, in which each pair of blocks merges at rate 1. Without going into too much detail about the construction of Kingman's coalescent, one interesting aspect of this process is that it comes down from infinity, that is to say the number of blocks \#\Pi(t) is finite almost surely for all t >0. To see why this is true we just need to look at \Pi(t)|_{[n]},

\mathbb{P}(\#\Pi(t)|_{[n]}\geq m)=\mathbb{P}\left(\sum_{i=m}^n \frac{2}{i(i-1)}\mathbf{e}_i> t\right)\leq \mathbb{P}\left(\sum_{i=m}^\infty \frac{2}{i(i-1)}\mathbf{e}_i> t\right)

where the \mathbf{e}_i are i.i.d. standard exponential random variables (here m\geq 2). But since the sum \sum_{i=2}^\infty \frac{2}{i(i-1)}\mathbf{e}_i is finite almost surely, its tails \sum_{i=m}^\infty \frac{2}{i(i-1)}\mathbf{e}_i vanish almost surely as m \rightarrow \infty, and hence \mathbb{P}(\#\Pi(t)|_{[n]}\geq m) \rightarrow 0 as m \rightarrow \infty, uniformly in n. Letting n \rightarrow \infty then gives \#\Pi(t)<\infty almost surely.
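
Here is a hedged Monte Carlo sketch of this bound (the parameters n, m, t and the trial count are arbitrary choices): it simulates the holding times of the restricted block-counting chain and estimates the probability on the left-hand side.

```python
import random

def prob_at_least_m_blocks(n, m, t, trials=20_000):
    """Estimate P(#Pi(t)|_[n] >= m) via the holding-time representation (m >= 2)."""
    hits = 0
    for _ in range(trials):
        # time needed for the restricted chain to go from n blocks down to m-1 blocks
        time_down = sum(2.0 / (i * (i - 1)) * random.expovariate(1.0) for i in range(m, n + 1))
        hits += (time_down > t)
    return hits / trials

t = 0.1
for m in (5, 10, 20, 40):
    print(m, prob_at_least_m_blocks(n=200, m=m, t=t))
# The estimates decrease towards 0 as m grows, uniformly in the truncation level n,
# which is the content of the bound above.
```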

Before we look more into this phenomenon, let us first characterize the coalescents via their jump rates. Without loss of generality we will henceforth assume that \Pi(0)=\mathbf{0}_\infty, and for \pi \in \mathcal{P}_{[n]}\backslash \{\mathbf{0}_{[n]}\}, where \mathbf{0}_{[n]}:=(\{1\},\dots,\{n\}), define

q_\pi:=\displaystyle\lim_{t \rightarrow 0}\displaystyle \frac{\mathbb{P}(\Pi(t)|_{[n]}=\pi)}{t}.

Denote by \mathcal{P}_{\infty,\pi}:=\{\pi'\in \mathcal{P}_\infty: \pi'|_{[n]}=\pi\} for \pi \in \mathcal{P}_{[n]}. There is a natural way to encode these jump rates using a measure \mu on \mathcal{P}_\infty which satisfies the following:

  1. \mu(\mathcal{P}_{\infty,\pi})=q_\pi for all n \in \mathbb{N} and each \pi \in \mathcal{P}_{[n]}\backslash \{\mathbf{0}_{[n]}\};
  2. consequently \mu(\{\mathbf{0}_\infty\})=0 and \mu(\{\pi\in \mathcal{P}_\infty:\pi|_{[n]}\neq \mathbf{0}_{[n]}\})<\infty;
  3. \mu is invariant under the action of permutations.

Indeed the converse is also true: if we have a measure \mu on \mathcal{P}_\infty such that 2. and 3. above hold, then there exists an exchangeable coalescent whose jump rates are given by \mu. The construction of such a process is very similar to the construction of Lévy processes from their jump measures using a Poisson point process. Though interesting, we leave this aside and take the statement at face value.
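
That said, here is a rough finite-state sketch of the idea (all names and parameters are my own choices, and only the restriction to [n] driven by a finite list of atoms is simulated): jump times arrive at the total rate of the atoms, the firing atom is chosen proportionally to its rate, and the current state is coagulated by it, mimicking the Poisson point process construction.

```python
import random

def canonical(blocks):
    return [sorted(b) for b in sorted((set(b) for b in blocks if b), key=min)]

def coag(pi, pi_prime):
    merged = []
    for idx in pi_prime:
        merged.append(set().union(*[set(pi[j - 1]) for j in idx if j <= len(pi)]))
    return canonical(merged)

def simulate(n, jump_measure, t_max):
    """Path of Pi|_[n] on [0, t_max]; jump_measure is a list of (partition, rate) pairs,
    each partition acting on block indices via Coag."""
    state = [[i] for i in range(1, n + 1)]              # start from the trivial partition
    path, t = [(0.0, state)], 0.0
    total_rate = sum(rate for _, rate in jump_measure)
    while total_rate > 0:
        t += random.expovariate(total_rate)             # time of the next atom
        if t > t_max:
            break
        pi_prime = random.choices([p for p, _ in jump_measure],
                                  weights=[r for _, r in jump_measure])[0]
        new_state = coag(state, pi_prime)
        if new_state != state:                           # atoms that change nothing are no-ops
            state = new_state
            path.append((t, state))
    return path

# Example: all pairwise mergers of block indices at rate 1, i.e. the Kingman dynamics
# described earlier, restricted to [4].
n = 4
jump_measure = [([[i, j]] + [[k] for k in range(1, n + 1) if k not in (i, j)], 1.0)
                for i in range(1, n + 1) for j in range(i + 1, n + 1)]
for time, part in simulate(n, jump_measure, t_max=5.0):
    print(round(time, 3), part)
```

With only finitely many atoms the total rate is finite, so the competing-clocks picture above is exactly the Poisson construction truncated to [n].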

As a concrete example, consider Kingman's coalescent described above. Recall that in Kingman's coalescent any two blocks merge at rate 1, so if we let K(i,j) be the partition whose blocks are all singletons except for the pair \{i,j\}, then the jump measure of Kingman's coalescent is given by \mu=\sum_{i<j} \delta_{K(i,j)}.
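
As a small numerical companion to this (the truncation level N is an arbitrary stand-in for the full index set), one can check that restricting \sum_{i<j}\delta_{K(i,j)} to [n] assigns rate 1 to every single pair-merger of [n] and nothing else, matching the jump rates q_\pi.

```python
from collections import Counter

def canonical(blocks):
    return tuple(tuple(sorted(b)) for b in sorted((set(b) for b in blocks if b), key=min))

def K(i, j, N):
    """All singletons of [N] except that i and j share a block."""
    return canonical([{i, j}] + [{k} for k in range(1, N + 1) if k not in (i, j)])

def restrict(pi, n):
    return canonical({x for x in b if x <= n} for b in pi)

n, N = 4, 12
trivial = canonical([{k} for k in range(1, n + 1)])
rates = Counter()
for i in range(1, N + 1):
    for j in range(i + 1, N + 1):
        pi = restrict(K(i, j, N), n)
        if pi != trivial:                  # atoms restricting to 0_[n] carry no jump
            rates[pi] += 1
print(rates)   # each of the 6 pair-mergers of [4] appears with total rate 1
```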

For the reader who is not comfortable with jump measures of Markov processes and/or is confused by the discourse above, not to worry. You do not need to understand all of that abstract nonsense: \mu simply describes the rates at which the various mergers occur in an infinitesimal amount of time.

There are nice decomposition results for the jump measures of coalescent processes in Bertoin's book (see below). We will be looking at the phenomenon of coming down from infinity, that is, the process having finitely many blocks a.s. at every non-zero time. Of course there is a trivial case that we should disregard, namely when \mu(\{\mathbb{N}\})>0, i.e. when there is a positive rate of jumping straight to the one-block partition \mathbb{N} (for instance from the state \mathbf{0}_\infty). In this case it is easy to deduce that eventually we will have finitely many blocks (in fact, one block).

With that aside, and with a little adaptation of the Schweinsberg paper, I present to you a pretty cute result.

Theorem:

Suppose that \mu(\{\mathbb{N}\})=0. Then either \#\Pi(t)<\infty a.s. for all t>0, or \#\Pi(t)=\infty a.s. for all t\geq 0.

Before we prove the theorem, let us see a lemma that will aid us with the main ideas of the proof. A warning I should give here is that the process \#\Pi:=(\#\Pi(t):t \geq 0) need not be Markovian, let alone strong Markovian. As far as I am aware, this question still remains open.

We would like to first show that the time at which the process comes down from infinity does not depend on the starting point, as long as the starting point has infinitely many blocks. For this, notice that \tilde \Pi(t):=Coag(\pi,\Pi(t)) has the same law as \Pi started from \pi.

Lemma:

For \pi \in \mathcal{P}_\infty let T_\pi:=\inf\{t \geq 0: \# Coag(\pi,\Pi(t)) < \infty\}. Then for all \pi \in \mathcal{P}_\infty with \#\pi=\infty we have T_\pi=T_{\mathbf{0}_\infty}.

Proof:

Notice that from the definition \# Coag(\pi',\pi'')= \infty if and only if \#\pi'=\#\pi''=\infty (just check what happens when one of them is finite). With this observation we see that in the case when \#\pi=\infty, \# Coag(\pi,\Pi(t))< \infty if and only if \#\Pi(t)<\infty, and hence T_\pi=T_{\mathbf{0}_\infty}.

\square

Proof of Theorem:

Let T=\inf\{t \geq 0:\#\Pi(t)<\infty\} and p=\mathbb{P}(T=0). Suppose that p>0. By the lemma and the Markov property, for every \epsilon>0 and k,n \in \mathbb{N}, conditionally on \#\Pi(k\epsilon/n)=\infty the probability that the process comes down from infinity immediately after time k\epsilon/n is again p. Applying this recursively at the times k\epsilon/n, k=0,\dots,n-1, gives \mathbb{P}(T>\epsilon)\leq (1-p)^n. Letting n \rightarrow \infty shows \mathbb{P}(T>\epsilon)=0 for every \epsilon>0, so in fact T=0 a.s., and since \#\Pi is non-increasing we get \#\Pi(t)<\infty a.s. for all t>0.

Next suppose that p=0; we will prove that \mathbb{P}(0<T<\infty)=0. There are some cases to consider, which we list and treat in order.

Case 1: \mathbb{P}(\{0<T<\infty\}\cap \{\#\Pi(T)=\infty\})=0

This follows from the strong Markov property and the lemma we have proved above. On the event \{0<T<\infty\}\cap\{\#\Pi(T)=\infty\}, the definition of T as an infimum forces \inf\{t \geq 0: \#\Pi(T+t)<\infty\}=0. On the other hand, by the strong Markov property \Pi(T+\cdot) has the law of Coag(\Pi(T),\Pi'(\cdot)) for an independent copy \Pi', so by the lemma this infimum is distributed as T, which equals 0 with probability p=0. Hence the event is null.

Case 2: \mathbb{P}(\{0<T<\infty\}\cap \{\#\Pi(T)<\infty\}\cap\{\#\Pi(T-)=\infty\})=0

This follows from the fact that we have declared \mu(\{\mathbb{N}\})=0. In particular, this means that we cannot merge all but finitely many blocks together at a single merger time: at any jump time \tau (i.e. any \tau with \Pi(\tau-)\neq\Pi(\tau)), if \#\Pi(\tau-)=\infty then \#\Pi(\tau)=\infty.

Case 3: \mathbb{P}(\{0<T<\infty\}\cap \{\#\Pi(T)<\infty\}\cap\{\#\Pi(T-)<\infty\})=0

Suppose that \Pi(T-)=(\pi_1,\dots,\pi_n,\emptyset, \dots) and let l_i:=\inf\pi_i. Define recursively T_0:=0 and, for k\geq 1, s_k:=\inf\{s \in \mathbb{N}: s \buildrel \Pi(T_{k-1})\over \nsim l_i \text{ for all } i\} and T_k:=\inf\{t>T_{k-1}: s_k \buildrel \Pi(t)\over \sim l_i \text{ for some }i\}. In plain English, T_k is the first time that the smallest integer not lying in any of the blocks of \Pi(T_{k-1}) containing l_1,\dots, l_n joins a block containing some l_i.

From this description we can see that T_0\leq T_1\leq \dots \leq T on the event in question, so it is enough to show that T_k\uparrow \infty to obtain a contradiction. Notice that if \lambda_{n+1} denotes the total rate of mergers involving a given collection of n+1 blocks, then the condition \mu(\{\pi\in \mathcal{P}_\infty:\pi|_{[n+1]}\neq \mathbf{0}_{[n+1]}\})<\infty directly implies that \lambda_{n+1}<\infty. But then each increment T_k-T_{k-1} stochastically dominates \mathbf{e}_k, where the \mathbf{e}_k are i.i.d. exponential random variables with parameter \lambda_{n+1}<\infty. The claim that T_k\uparrow \infty now follows from

T_k=\sum_{i=1}^k (T_i-T_{i-1})+T_0\geq \sum_{i=1}^k\mathbf{e}_i\rightarrow \infty.

\square

There are interesting results on the so-called \Lambda-coalescents, for which I have given a list of references below. The \Lambda-coalescents form a subclass of the coalescents discussed here, with the restriction that at any merger time the participating blocks coalesce into a single block.

Further Reading:

  • Jean Bertoin, Random Fragmentation and Coagulation Processes
  • Nathanael Berestycki, Recent Progress in Coalescent Theory
  • Jason Schweinsberg, Coalescents with Simultaneous Multiple Collisions
  • Jim Pitman, Coalescents with Multiple Collisions
  • Jason Schweinsberg, A Necessary and Sufficient Condition for the \Lambda-coalescent to Come Down from Infinity
