Friday, July 15, 2011

More robust version of brownie.lite()

I just posted a slightly more robust version of brownie.lite(), a function that implements the method of O'Meara et al. (2006).

In previous versions, the general purpose optimizer optim() from the "stats" package would occasionally (say, 1 in 100 times) falsely converge - that is, for the multi-rate model it would converge to a region of parameter space far from the true optimum, with a very low likelihood, but would nonetheless report that it had found the maximum of the likelihood surface. Oops! This is not a huge problem in practice, because when it happens the multi-rate model usually ends up with a likelihood much worse than that of the single-rate model - something that is impossible, by definition, at the true optima, since the models are nested. The difference in log-likelihood between the simpler one-rate model and the more complex multi-rate model can be huge in this case (for instance, -177.58 compared to -4776.32 in one example), so it is easy to see that optimization of the multi-rate model has failed.
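
To make the check concrete, here is the comparison in miniature, using the example numbers above (a sketch only; inside brownie.lite() the two log-likelihoods are of course computed from the data):

logL1 <- -177.58 # log-likelihood of the simpler one-rate model
logL2 <- -4776.32 # log-likelihood of the multi-rate model after a false convergence
logL2 >= logL1 # FALSE - impossible for correctly optimized nested models, so optimization failed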

To avoid this kind of gross non-convergence, I have now added a while() loop that checks that the multi-rate model has at least as good a likelihood as the one-rate model, and re-optimizes with new random starting values for as long as it does not:

## keep re-optimizing until the multi-rate model fits at least as well as the one-rate model
while(logL2 < logL1){
  message("False convergence on first try; trying again with new starting values.")
  ## perturb the starting values by random factors drawn uniformly on [0, 2)
  model2<-optim(starting*2*runif(n=length(starting)),fn=likelihood.multiple,y=x,C=C2,
    control=list(maxit=maxit),hessian=TRUE,method="L-BFGS-B",lower=lower)
  logL2<--model2$value # likelihood.multiple returns the negative log-likelihood
}


Anyway, this seems to help. Other cases of non-convergence can probably be fixed by increasing maxit in the function arguments, e.g.:

> brownie.lite(...,maxit=4000) # default is 2000
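
To see the new check in action on a dataset where the true generating model is known, one quick test is to simulate a regime-mapped tree and two-rate Brownian data and then fit both models. This is just a sketch - the tree size, transition matrix Q, and simulated rates below are arbitrary choices, not anything built into brownie.lite():

library(phytools)
set.seed(1)
tree <- pbtree(n=100, scale=1) # simulate a pure-birth tree of 100 taxa
Q <- matrix(c(-1,1,1,-1), 2, 2, dimnames=list(c("a","b"),c("a","b"))) # symmetric 2-state Q
mtree <- sim.history(tree, Q) # stochastically map the discrete regime onto the tree
x <- sim.rates(mtree, sig2=c(a=1, b=10)) # Brownian data: rate 1 under "a", rate 10 under "b"
fit <- brownie.lite(mtree, x, maxit=4000)
fit # the multi-rate log-likelihood should always be >= the one-rate log-likelihood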

Good luck!

4 comments:

  1. Hi Liam,

    I am trying to run brownie.lite on my dataset, with a tree of 150 tips and 3 character states. With the raw values of my continuous data I never get convergence, even when I increase the number of iterations from 2,000 to 5,000,000; however, if I multiply my continuous variable by 100, it converges very quickly with just 2,000 iterations.

    I guess there is nothing wrong with multiplying my continuous variable by a constant, as this should preserve the relative values of my rates (am I right?) - but why does this particular detail make the analysis converge?

    thanks a lot!


    Joan.

  2. Hi Joan.

    It depends on what the error is, but in general when the rates are very small (which they will tend to be if either the scale of the trait is very small or the tree is very long), the function - and optimization generally - will run into issues of numerical precision. That is to say, some of the values computed by the function will be smaller than the smallest positive value that R can represent.
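
    To give a rough sense of the limits involved (just an illustration of double-precision underflow, not anything specific to brownie.lite()):

    .Machine$double.xmin # smallest positive normalized number R can represent, about 2.2e-308
    exp(-1000) # anything smaller than that simply underflows to 0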

    After multiplying the data by a constant k, to get the estimated rates back on the original scale you just divide them by k^2.
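
    For instance, with a mapped tree mtree and data x like those in the simulated example in the post above (a sketch only - check names() of the fitted object in your version of phytools before relying on the element name sig2.multiple):

    k <- 100
    fit.scaled <- brownie.lite(mtree, k*x, maxit=4000) # fit to the rescaled data
    fit.scaled$sig2.multiple/k^2 # estimated rates, back-transformed to the original scale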

    Another possible error can occur if the lowest estimated rate is more than 10,000 times smaller than the MLE of a single, constant rate. However, this wouldn't be resolved by multiplying the data by a constant, since rescaling changes all the rates by the same factor.

    If you send me your data, I would be happy to try and diagnose further.

    - Liam

  3. Just sent you an email with the data.

    thank you very much for your help!


    Joan.

