*whether* or not two matrices (the VCV matrices for different branches in the tree) differ, but not at all *how* they differ. For a while I have been meaning to implement a version of this method in which various hypotheses about matrix similarity can be tested, but I have not yet had the time to dedicate to this fairly difficult problem.

In the interim, and on a user request, I have coded a two-trait version of the function which fits four models: shared correlation and variances (rates); different variances, but shared correlation; different correlation, but shared variances; and, finally, no common structure. The function is called evolvcv.lite, and I have posted a version to my R phylogenetics page (code here).
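To make the four hypotheses concrete: for two traits, each regime's rate matrix is fully described by two rates (the diagonal) and one evolutionary correlation. Here is a minimal sketch in base R (make_R is a hypothetical helper for illustration, not part of phytools) of how rates and a correlation combine into a VCV rate matrix:

```r
# hypothetical helper: build a 2x2 rate (VCV) matrix from per-trait
# rates (variances) sig2 and an evolutionary correlation r
make_R <- function(sig2, r) {
  s <- sqrt(sig2)
  matrix(c(sig2[1], r * s[1] * s[2],
           r * s[1] * s[2], sig2[2]), 2, 2)
}
R <- make_R(c(4, 1), -0.5)
cov2cor(R)[1, 2]  # recovers the correlation, -0.5
```

The four models then just constrain which of these quantities are shared across regimes: everything shared (model 1), rates free (model 2), correlation free (model 3), or everything free (model 4).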

Just to show the reader how this method can be used to reveal new things about the evolutionary process (compared to evol.vcv), let's consider data for two characters simulated without correlation, but with a rate that depends on the state of a discretely valued trait. For example:

> require(phytools); require(geiger)

> # simulate a tree

> tree<-drop.tip(birthdeath.tree(b=1,d=0,taxa.stop=101),"101")

> tree<-rescaleTree(tree,1)

> # simulate a random character history

> Q<-matrix(c(-1,1,1,-1),2,2)

> mtree<-sim.history(tree,Q)

> plotSimmap(mtree,ftype="off") # plot
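In case Q looks opaque: it is the transition rate matrix of a continuous-time Markov model for the two-state discrete character, so its off-diagonal entries are transition rates and each row sums to zero. A quick base-R sanity check:

```r
# rate matrix for the two-state character: rows must sum to zero and
# off-diagonal entries (transition rates) must be non-negative
Q <- matrix(c(-1, 1, 1, -1), 2, 2)
rowSums(Q)           # both zero
Q[row(Q) != col(Q)]  # both transition rates equal 1
```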

Now let's simulate two traits evolving on the tree. Trait 1 has a big rate difference between the black branches (slow) and the red branches; whereas the rate difference is much smaller for trait 2. In addition, traits 1 & 2 are uncorrelated on both red and black branches.

> sig1<-c(1,10); names(sig1)<-c(1,2)

> sig2<-c(1,3); names(sig2)<-c(1,2)

> X<-cbind(sim.rates(mtree,sig1),sim.rates(mtree,sig2))
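For reference, the generating (true) rate matrices implied by this setup are diagonal, i.e., the evolutionary correlation is zero in both regimes; this is what a well-behaved method should recover:

```r
# true rate matrices used to generate the data: both diagonal
R1.true <- diag(c(1, 1))   # state 1 ("black"): both traits slow
R2.true <- diag(c(10, 3))  # state 2 ("red"): trait 1 much faster
cov2cor(R2.true)[1, 2]     # true correlation is zero in each regime
```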

With these data in hand, first we fit the simple all-or-nothing evol.vcv() models:

> res1<-evol.vcv(mtree,X)

> res1

$R.single
            [,1]        [,2]
[1,]  4.47157442 -0.07452855
[2,] -0.07452855  1.37747715

$logL1
[1] -225.8994

$k1
[1] 5

$R.multiple
, , 1

           [,1]       [,2]
[1,]  1.0015801 -0.1486356
[2,] -0.1486356  0.8584742

, , 2

           [,1]       [,2]
[1,] 11.7551191 -0.0673687
[2,] -0.0673687  2.6448004

$logL.multiple
[1] -187.3565

$k2
[1] 8

$P.chisq
[1] 1.294187e-16

$convergence
[1] "Optimization has converged."
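As an aside, $P.chisq is just a standard likelihood-ratio test, and can be reproduced from the fitted log-likelihoods and parameter counts printed above:

```r
# likelihood-ratio test comparing the one- and two-matrix models,
# using the values reported by evol.vcv() above
logL1 <- -225.8994; k1 <- 5  # single-matrix model
logL2 <- -187.3565; k2 <- 8  # multiple-matrix model
LR <- 2 * (logL2 - logL1)
P  <- pchisq(LR, df = k2 - k1, lower.tail = FALSE)
P  # ~1.3e-16, matching $P.chisq
```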

Obviously, these results reveal a very strong difference between the matrices in different parts of the tree. However, what they don't show is that the evolutionary correlation is conserved, and that the two-rate model fits so much better because the rates differ, not the correlation. Now, for comparison, let's load the function evolvcv.lite() and analyze the four models that it implements:

> source("evolvcv.lite.R")

> res2<-evolvcv.lite(mtree,X)

> res2

$model1
$model1$description
[1] "common rates, common correlation"

$model1$R
            [,1]        [,2]
[1,]  4.47161644 -0.07452615
[2,] -0.07452615  1.37747050

$model1$logLik
[1] -225.8994

$model1$k
[1] 4

$model1$AIC
[1] 459.7988

$model2
$model2$description
[1] "different rates, common correlation"

$model2$R1
           [,1]       [,2]
[1,]  0.9913016 -0.0969871
[2,] -0.0969871  0.8498546

$model2$R2
            [,1]       [,2]
[1,] 11.9360252 -0.5993744
[2,] -0.5993744  2.6956236

$model2$logLik
[1] -187.585

$model2$k
[1] 6

$model2$AIC
[1] 387.1699

$model3
$model3$description
[1] "common rates, different correlation"

$model3$R1
          [,1]      [,2]
[1,]  4.710074 -1.697695
[2,] -1.697695  1.644100

$model3$R2
          [,1]      [,2]
[1,] 4.7100741 0.1904557
[2,] 0.1904557 1.6441005

$model3$logLik
[1] -222.3782

$model3$k
[1] 5

$model3$AIC
[1] 454.7564

$model4
$model4$description
[1] "no common structure"

$model4$R1
           [,1]       [,2]
[1,]  1.0014259 -0.1489618
[2,] -0.1489618  0.8585239

$model4$R2
            [,1]        [,2]
[1,] 11.75241519 -0.06562557
[2,] -0.06562557  2.64366677

$model4$logLik
[1] -187.3565

$model4$k
[1] 7

$model4$AIC
[1] 388.7131

So here we can see that the best-fitting model (based on AIC; we could also use likelihood-ratio tests for nested models) is the one with different evolutionary rates but a common evolutionary correlation (model 2), not, in fact, the no-common-structure model (model 4). Neat.
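One convenient way to summarize this comparison is with Akaike weights computed from the four AIC scores reported above:

```r
# Akaike weights from the AIC values reported by evolvcv.lite()
aic <- c(model1 = 459.7988, model2 = 387.1699,
         model3 = 454.7564, model4 = 388.7131)
w <- exp(-(aic - min(aic)) / 2)
w <- w / sum(w)
round(w, 3)  # nearly all weight on models 2 & 4, mostly model 2
```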

Nice job, Liam. The function seems to be working well. I'll keep you posted on my progress with it.

Cool. Hopefully it gives you some interesting results. I'm going to make one addition, which is to return the values for convergence from optim(). This way you can be assured of whether or not the likelihood optimizations converged. (At present, this is hidden from the user.) - Liam
