Both the model and the function are very simple. Basically, we scale the rate of evolution along the branches descending from each node by the estimated lineage density at that node. The user can supply a vector of lineage densities, or the function can estimate lineage density by merely counting the number of branches in the tree at the time of the node in question. The latter option, obviously, does not take into account extinction or incomplete taxon sampling. The assumption implicit in scaling branches in this way is that variation in the rate of phenotypic evolution among branches is entirely determined by lineage densities at the origin of each branch!
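To make the "counting branches" option concrete, here is a toy sketch (not the actual function) of how one might count the number of lineages present in an ultrametric tree at the time of each internal node, using ape. The name lineage.density and the convention used (lineages present just before the node splits) are my choices for illustration:

library(ape)
## count the lineages alive at the moment of each internal node
## (just before that node splits); assumes an ultrametric tree
lineage.density<-function(tree){
    bt<-branching.times(tree) # node ages, as time before the present
    ## 1 lineage at the root, plus one per strictly older node
    sapply(bt,function(t) 1+sum(bt>t))
}

The result is a vector of densities named by internal node number, which is the kind of vector the user could alternatively supply directly.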

The way I implemented this in practice was straightforward. I merely rescaled the branches of the tree based on estimated or supplied lineage density, and then maximized the likelihood of the scaling parameter and starting rate given the data. The function also allows the user to visualize this branch length rescaling with the option showTree=TRUE.
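The rescaling step can be sketched roughly as follows. Here density is a vector of lineage densities named by internal node number, and psi is my placeholder name for the scaling exponent (this is an illustration of the idea, not the function's actual code):

## hypothetical sketch: multiply each branch by the lineage density
## at its parent node, raised to the scaling exponent psi
rescale.by.density<-function(tree,density,psi){
    parent<-tree$edge[,1] # parent node of each edge
    tree$edge.length<-tree$edge.length*
        density[as.character(parent)]^psi
    tree
}

One would then compute the likelihood of a Brownian model on the rescaled tree and optimize over psi and the starting rate.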

In the process of adding error handling to this function I also made a curious realization about extracting vectors from data frames and matrices that I thought might be of interest to some readers.

Consider the following matrix:

> X
      V1    V2
A  0.949 1.062
B  1.248 1.375
C -0.835 1.047

If we want to extract "V1" we can just type:

> v1<-X[,1]
> v1
     A      B      C
 0.949  1.248 -0.835

Here, v1 inherits the row names of X. By contrast, if X is a data frame:

> X<-as.data.frame(X)
> X
       V1     V2
A  -0.264  0.100
B   0.026 -1.961
C  -1.408 -0.205
> v1<-X[,1]
> v1
[1] -0.264  0.026 -1.408

In this case, although the operation on X is *identical*, v1 *does not* inherit the row names of X as its names! Tricky.
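If we want the names back when X is a data frame, we have to attach them ourselves, or go through a matrix first:

> v1<-setNames(X[,1],rownames(X)) # assign row names explicitly
> v1
     A      B      C
-0.264  0.026 -1.408
> v1<-as.matrix(X)[,1] # or convert to a matrix, then subset

Both give a named numeric vector, which matters whenever downstream functions (as is typical for phylogenetic methods in R) match data to species by name.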