Calculating R-square values for two LME models with R

Calculating R-squared values for LME models cannot be done directly in R (as far as we have found). Thanks to Corine Horsch, we followed a method found in this book and created two functions to calculate R-squared immediately, which you can see below:


#both models are assumed to be fitted with nlme::lme
#Model0 is the null model, Model1 the model with the fixed effect(s) added
getRSquare <- function(Model0, Model1){

  #residual variances
  sigma0SQ <- Model0$sigma^2
  sigma1SQ <- Model1$sigma^2

  #random-intercept variances (the diagonal of the variance-covariance matrix)
  tau0SQ <- diag(getVarCov(Model0))
  tau1SQ <- diag(getVarCov(Model1))

  Rsquare <- 1 - ((sigma1SQ + tau1SQ) / (sigma0SQ + tau0SQ))

  return(Rsquare)

}
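
For example (a minimal usage sketch; the data set, variable and model names below are made up for illustration, and both models should be fitted on the same data with ML estimation):

require(nlme) #lme() and getVarCov() come from this package

#a null model and a model adding one fixed effect (illustrative names)
model0 <- lme(score ~ 1, random = ~1 | school, data = mydata, method = "ML")
model1 <- lme(score ~ treatment, random = ~1 | school, data = mydata, method = "ML")

#proportion of residual + random-intercept variance explained by the fixed effect
getRSquare(model0, model1)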

In models with nested random effects, the function getVarCov() may not work. We then have to resort to a sneakier way of getting the tau values, using the more generic VarCorr() function:


vc.Model0 <- VarCorr(Model0)
tau0SQ <- as.numeric(vc.Model0[4, 2])^2 #squared standard deviation of the inner random effect
vc.Model1 <- VarCorr(Model1)
tau1SQ <- as.numeric(vc.Model1[4, 2])^2

Note here that we are accessing the table at a specific cell: [4,2]. If you print vc.Model0 or vc.Model1, the cell in the 4th row/2nd column holds the standard deviation of the inner nested random effect, which we square to get the corresponding tau value. To get the outer one, for example, you would use cell [2,2]. Experiment with this table to understand better how to access the right cell in models with several nested random effects.
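
For convenience, the same idea can be wrapped into a variant of getRSquare for nested models. This is only a sketch based on the cells described above; the row index (4 for the inner random effect in our case, 2 for the outer one) is an assumption you should verify by printing the VarCorr() table for your own models:

getRSquareNested <- function(Model0, Model1, row = 4){

  #residual variances, as before
  sigma0SQ <- Model0$sigma^2
  sigma1SQ <- Model1$sigma^2

  #column 2 of the VarCorr() table holds standard deviations;
  #squaring the value in the chosen row gives the tau of that random effect
  tau0SQ <- as.numeric(VarCorr(Model0)[row, 2])^2
  tau1SQ <- as.numeric(VarCorr(Model1)[row, 2])^2

  return(1 - ((sigma1SQ + tau1SQ) / (sigma0SQ + tau0SQ)))

}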


Mixed models with nested random effects in R

Once again, SPSS failed me miserably when I tried to run the mixed, multilevel model below:

[Figure: the nested model]

The first two levels, BSO (i.e. day care center) and pp (participant), are nested random effects. As the model shows, each participant's "emotional" or behavioral code (e.g. engaged, excited, bored, …) was sampled several times, during two sessions (once for each of the app's two versions). SPSS would not let me build this model properly, mostly because it failed to comprehend that random effects can have more than one level.

So I turned to R and found several very interesting (though maybe a bit long to read) tutorials using the package lme4; credit goes to Bodo and Ben for their extensive efforts to simplify what could be a very complex process.

The way it worked for me is described below, in a (perhaps overly) simplified way.

require(xlsx) #to read excel sheets
require(lme4) #the package
Behavioral <- read.xlsx("behavioral.xlsx", 1) #the data set (first sheet of the Excel file)

#certain variables need to be treated as factors
#(by default they would be treated as numeric covariates)
Behavioral$version <- as.factor(Behavioral$version)
Behavioral$pp <- as.factor(Behavioral$pp)

#first we construct the null model:
#emotion type as response, no fixed factors,
#and pp nested within BSO as random factors
#note: (1|BSO/pp) is shorthand for (1|BSO) + (1|BSO:pp)
model.null <- lmer(emo.type ~ 1 + (1|BSO/pp), data = Behavioral, REML = FALSE)

#we then construct our model, adding version as a fixed factor
model <- lmer(emo.type ~ version + (1|BSO/pp), data = Behavioral, REML = FALSE)

#see summaries of the models here
summary(model.null)
summary(model)

#to obtain the p-value, i.e. to see whether adding
#the fixed factor made a significant difference
#(a likelihood ratio test between the two models)
anova(model.null, model)

#and finally
citation("lme4")
#will generate a BibTeX entry so you can cite the authors of the package!
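
If you also want to look at the estimated variance components of the lme4 fits (for example to compute a pseudo-R-squared by hand, in the spirit of the first part of this post), VarCorr() works on lmer objects as well. A minimal sketch, assuming the model.null and model objects fitted above:

#print the estimated standard deviations of the BSO, pp-within-BSO
#and residual terms for both models
print(VarCorr(model.null))
print(VarCorr(model))

#the same information in an easier-to-index data frame
as.data.frame(VarCorr(model))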