It has been a while since I last posted some stats related material. Today I am getting back to this amazing topic, focusing on how we can compare factor structures across cultural samples. I have done this previously with SPSS. Today I am focusing on R, which is way cooler.
In cross-cultural psychology, we often use factor analysis (or principal component analysis) to examine the factor structure of an instrument. But how can we tell whether the factors that we find are comparable? And how similar are they to each other? In order to do this, we need to make the factor structures maximally comparable with each other and then get an overall estimate of factor similarity. This is what Procrustean Rotation and indices such as Tucker's Phi are all about.
You may ask: Why do we need rotations with such weird Greek mythological names (if you wonder about the history of the name, look up the mighty evil rogue Procrustes on Google)? The problem is that, simply speaking, any factor rotation is arbitrary and there are infinitely many possible solutions that can be mathematically fitted to any factor structure. This means that there is a good chance that sample-specific fluctuations will make factors look quite different. Apparently dissimilar factor structures might be more similar than we think; Procrustean rotation is necessary to judge how similar they really are.
Hence, I will cover the magic of how to do this in R, a free and awesome statistics program. Assuming that you are new to R, I will cover the basics of how to set your path and get your data in. If you know what you are doing, you can skip ahead to the later steps.
Step 1. Set your working directory
You need to set a working directory. This step is important because it will allow you to call your data file later on repeatedly without listing the whole path of where it is saved. For example, I saved the file that I am working with on my USB drive. I need to type this command:
setwd("F:\\")
If I had saved all the data on my dropbox folder in a folder called 'Stats' that is in my 'PDF' folder, then I would need to type this command:
setwd("C:\\Users\\Ron\\Dropbox\\PDF\\Stats")
Two important points:
a) for some strange reason (the backslash is an escape character in R) you need double \\ to set your directory paths on Windows. You could also use / instead of \\ (e.g., setwd("C:/Users/Ron/Dropbox/PDF/Stats")). This is just to confuse you... But R is still awesome.
b) make sure that there are no spaces in any of your file or directory paths. R does not like it and will throw a tantrum if you have a space somewhere.
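By the way, if you ever want to double-check where R is currently looking, two base R commands are handy (this is just an optional sanity check on my part, not part of the original workflow):
getwd() #prints the working directory that R is currently using
list.files() #lists the files in that directory, so you can check that your .csv file is actually there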
Step 2. Read your data into R
The most convenient way to read data into R is using .csv files. Any programme like SPSS or Excel will allow you to save your data as a .csv file.
You need to type:
ocb=read.csv("ocb_efa.csv", header=TRUE)
R is an object oriented language, which means we will constantly create objects by calling on functions: object <- function. This may seem weird at first, but will allow you to do lots of cool stuff in a very efficient way.
I am using a data set that tested an organizational citizenship behavior scale, so I am calling my object that contains the data 'ocb'. Just as a bit of background, I am using data from Fischer and Smith (2006). They measured self-reported work behaviour in British and East German samples, which they called extra-role behaviour. Extra-role behaviour is pretty much the same as citizenship behaviour: voluntary and discretionary behaviour that goes beyond what is expected of employees, but helps the larger organization to survive and prosper. These items were supposed to measure a more passive component (factor 1) and a more proactive component (factor 2). We will need this info on the expected factors below...
The argument header=TRUE (or you could keep it short and just type T) tells R that the variable names are included in the first row of the file.
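If you want a quick first look at what you just imported (an optional check I am adding here; the exact variable names will of course depend on your own file), you can use a few base R commands:
str(ocb) #shows the structure of the data frame: variable names, types and the first few values
head(ocb) #prints the first six rows
dim(ocb) #gives the number of rows and columns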
Step 3. Preparing your data (dealing with missing data, checking your data, etc.)
R does not like missing data. We will need to define which values are missing. I previously coded all missing data as -999 in SPSS or Excel. Now I have to declare that these annoying -999s should be treated as missing values.
If you type:
summary(ocb)
You will see that the minimum value is -999. The simplest and most straightforward option is to write this short command, which converts all these offending values into NA - the R form of missing data.
ocb[ocb==-999]<-NA
Note the square brackets and double ==. If you want to treat only a selected variable, you could write:
ocb$ocb1[ocb$ocb1==-999] <- NA
This tells R that you want only the first variable in the dataframe ocb to be treated in this way.
To check that all worked well, type:
summary(ocb)
You should see something like this:
If all went well, your minimum and maximum values are now within the bounds of your original data and you have a row showing the number of NA's at the bottom of each variable column.
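Another optional check I like to run is to count the missing values per variable, which tells you how much missing data you are actually dealing with:
colSums(is.na(ocb)) #counts the number of NA's in each column of the data frame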
As you can see, we also have a variable called country with 1's and 2's. This is not that useful because, last time I checked, these are not good names for countries and might be a bit confusing. The best option is to convert this variable into what is called a factor in R (don't confuse it with factor analysis). Basically, it becomes a dummy variable and we can give it labels. In my case, I have data from British and German employees, so I am using UK and German as labels.
You can type:
ocb$country<-factor(ocb$country, #specifies the variable to be recoded
levels = c(1,2), #specifies the numeric values
labels = c("UK", "German")) #specifies the labels assigned to each numeric value
If you wonder, the # allows me to add annotations to each command line, that tell me (and you) what is going on, but R is ignoring these sections.
If you type summary(ocb) again, you should now see that there are 130 responses from the UK and 184 from Germany.
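A quick alternative check (again optional) is to tabulate the new factor directly:
table(ocb$country) #shows how many cases carry each country label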
There is one more thing we need to do. In our analyses, we want to compare the factor analysis results of the two samples. Therefore, we need to create a separate data set for each sample that includes only the variables we need for our factor analysis. This can be achieved with the subset command, which creates a new object with only the data that we need for each analysis. At the same time, we can also use this command to select only the relevant variables for our factor analysis.
To create the UK data set, you can type:
ocb.UK<-subset(ocb, #creates a new data frame using the original ocb data frame
country=="UK", #this is the variable that is used for subsetting, note the double ==
select=c(2:10)) #we only need the continuous variables which were in column 2 to 10
To see whether it worked, type:
summary(ocb.UK) #check that it worked
nrow(ocb.UK) #check that it worked, this command will give you the number of rows
Then repeat the procedure to create the German data set:
ocb.German<-subset(ocb,
country=="German",
select=c(-1)) #if you wonder, this is an alternative way of selecting the variables, by dropping the first column which had the country dummy factor
To check, you know the drill (summary or nrow).
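Just in case you want it spelled out, the check for the German subset would look like this:
summary(ocb.German) #check that the subset contains only the OCB items and sensible values
nrow(ocb.German) #this should give you the number of German respondents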
Step 4. Installing and loading the analysis packages for your analysis
R is a very powerful tool because it is constantly expanding. Researchers from around the world are uploading tools and packages that allow you to run fancy new stats all the time. However, the base installation of R does not include them. So we need to tell R which packages we want to use.
For the type of measurement invariance tests that I am talking about today, we will need these two: psych (written by William Revelle, an amazing package; check out some of the awesome stuff you can do with this package here) and GPArotation.
Write this code to download and install the packages on your machine:
install.packages(c("psych", "GPArotation"))
Make sure you have good internet connectivity and you are not blocked by an institutional firewall. I had some problems recently trying to download R packages when accessing it from a university campus with a strong firewall.
Once all packages are downloaded, you need to call them before you can run any analyses:
library("psych")
library("GPArotation")
Important: You need to call these packages each time that you want to run some analyses, if you have restarted R or RStudio. Now we should be ready to start our analyses.
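If you share your script with others (or with your future self on a new computer), one small convenience trick - just one possible way of doing it, not something you have to do - is to install a package only when it is missing:
if(!require("psych")) install.packages("psych") #require() loads the package and returns FALSE if it is not installed yet
if(!require("GPArotation")) install.packages("GPArotation")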
Step 5. Run the analysis in each sample
I have used the term factor analysis so far. Technically, I am going to use principal component analysis (PCA). There is a lot of debate about whether factor analysis or principal component analysis is better... I touched upon this in class, but will not repeat it here. Let's just stick with PCA for the time being and be happy. I will also continue to use the term 'factors', even though this is factually incorrect (they are principal components) and I am likely to burn in statistical hell. I am happy to brave this risk...
To run the PCA, we need to type a short command line. Let's break it down. pca_2f.uk is the name that I gave the new object that R will create. The name is pretty much up to you; I called it pca (because I am running a PCA) with 2 factors (hence 2f) based on the British data (voilà, this is what uk stands for). The command 'principal' tells R what to do: run a principal component analysis. After the opening bracket, I first specify the data object (ocb.UK), then how many factors I want to extract (nfactors=2), followed by the type of rotation (I decided to go with varimax rotation, which is a form of orthogonal rotation that assumes independence of factors). So this is what I write:
pca_2f.uk<-principal(ocb.UK,
nfactors=2,
rotate="varimax")
If you run it, nothing will happen. We just created an object that contains the PCA results. To actually see it, we can either call all the output by typing:
pca_2f.uk
Or we could sort the factor loadings by size and suppress small factor loadings (for example, factor loadings smaller than .3). To get this, write:
print.psych(pca_2f.uk, cut=0.3, sort = T)
Now you should see some output like this:
As you can see, the first item loaded on both factors. However, overall there seems to be a pretty neat two-factor structure.
Now you need to do the same thing for the German data set. This is not rocket science and I hope you would have come up with the same code yourself:
pca_2f.german<-principal(ocb.German,
nfactors=2,
rotate="varimax")
print.psych(pca_2f.german, cut=0.3, sort = T)
The output looks like this:
The first item loads much more clearly on factor 2 in this German data set compared to the British data set. But what can we say about this difference? We can't really compare the two factor solutions directly, because there might be arbitrary changes due to sample fluctuations or other funny jazz (this is a highly technical term). Now we get to the crux of this whole issue, because we need to do Procrustean rotation. Procrustean rotation (have you looked up Procrustes yet?) does what the name says: it rotates and fits one solution to the other, making them directly comparable.
Before we get there, take a deep breath and have a look at this picture...
Feeling more relaxed and calmer now? Let's move on to the real stuff!
Step 6. Run the Procrustean rotation
For those of you who have done the procrustean rotation stuff in SPSS (for a reminder, have a look here), you might have braced yourself for a massive typing exercise with lots of random error messages and annoying missing commas, semi-colons and curly brackets. Fear not - R is making it much easier.
To run the actual procrustean rotation, we need to type one little command line. To break it down again, we create a new object that contains our rotated factor loadings. I called it 'pca2.uk.rotated'. We tell R what to do (run a Target rotation... hence the command is called 'TargetQ'), specify which factor loadings we want to rotate and what we want them rotated towards - our target. I used the German sample as the target. This is a pretty arbitrary choice, but I decided to use it because a) the German sample is larger and b) the German sample had a slightly cleaner initial structure.
Here is the command:
pca2.uk.rotated<-TargetQ(pca_2f.uk$loadings, Target=list(pca_2f.german$loadings))
If we now call the object (just type the name of the object), we should see something like this:
The first item still shows up as loading on both factors, but the loading on the first factor is somewhat reduced. We could now start a bit of a tea leaf reading exercise and look at all the little changes that have happened after rotation. This can be informative and if you have your own data sets, this is probably a good thing to do. Yet, these impressions do not allow us to get a sense of how statistically similar the two factor solutions are. Do these differences matter?
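To make this eyeballing a bit easier, one option (purely a convenience step I am adding here) is to put the Procrustes-rotated British loadings and the German target loadings side by side in one table:
round(cbind(unclass(pca2.uk.rotated$loadings), unclass(pca_2f.german$loadings)), 2) #first two columns: rotated UK loadings; last two columns: German loadings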
Hence, the final step for today... We need to calculate the overall similarity.
Step 7. Compute Factor Congruence Coefficients
There are a number of different ways to calculate factor congruence or factor similarity. The most common one is Tucker's Phi. You can read up more about it in a chapter that I have written together with Johnny Fontaine. Send me a message if you want a copy.
To get Tucker's Phi, we again have to write a single command line. The command is simple: 'factor.congruence' and all we need to specify is which loadings from what analyses we want to analyze. In our case, we want to compare the original German factor loadings with the procrustean rotated British loadings. Hence, we write:
factor.congruence(pca2.uk.rotated$loadings,pca_2f.german$loadings)
We will see a 2 x 2 matrix, which has Tucker's Phi on the diagonal. As you should see, the similarity for factor 1 is .94 and for factor 2 is .97. If you compare it with the standards that we discuss in the book chapter, this is pretty good similarity. The small changes that we see across the two samples do not matter that much.
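In case you are curious what factor.congruence actually computes: Tucker's Phi for two columns of loadings is simply the sum of the products of the loadings, divided by the square root of the product of their sums of squares. Here is a minimal sketch of that formula (for illustration only - the factor.congruence command above already does this for you):
tucker.phi<-function(x, y) sum(x*y)/sqrt(sum(x^2)*sum(y^2)) #Tucker's congruence coefficient for two loading vectors
tucker.phi(pca2.uk.rotated$loadings[,1], pca_2f.german$loadings[,1]) #should reproduce the value reported for factor 1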
If you want another indicator, we could compute the correlation between the two factor structures. This again is relatively straightforward. Without creating a new object, we could just type (note that we use the same structure as for the factor.congruence statement):
cor(pca2.uk.rotated$loadings,pca_2f.german$loadings)
The correlation matrix shows us on the diagonal that the correlation for factor 1 is .87 and for factor 2 is .93. Therefore, the correlation coefficient suggests that factor 2 is pretty similar. However, factor 1 is not doing that great. Maybe item 1 is a bit dodgy after all.
As we discuss in the chapter, it can be useful to compare the different indices. If they agree, you are all set and can happily go your way comparing the factor structures. If they diverge (as they do a wee bit in this case), you may want to explore further. In our case, it might make sense to remove the first item and redo the analyses. If we do this and re-run all the steps after excluding ocb1 (see the subsetting command at step 3; a rough sketch of this re-run follows below), we will find the two structures are now beautifully similar. Nearly like identical twins... Who would have thought that of ze Germans and ze Brits...
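In case you want to try this yourself, here is how that re-run could look. I am assuming here, as in my own data set, that ocb1 sits in column 2, so we now select columns 3 to 10:
ocb.UK2<-subset(ocb, country=="UK", select=c(3:10)) #same subsetting as before, but without ocb1
ocb.German2<-subset(ocb, country=="German", select=c(3:10))
pca_2f.uk2<-principal(ocb.UK2, nfactors=2, rotate="varimax")
pca_2f.german2<-principal(ocb.German2, nfactors=2, rotate="varimax")
pca2.uk.rotated2<-TargetQ(pca_2f.uk2$loadings, Target=list(pca_2f.german2$loadings))
factor.congruence(pca2.uk.rotated2$loadings, pca_2f.german2$loadings) #the congruence coefficients should now come out even higher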
I hope you have enjoyed this little excursion into R and procrustean rotation. I am a big fan of the capabilities of R and what you can do with it for cross-cultural analyses. I hope I got you inspired too.
Any questions or comments, please get in touch and comment :)
Now... rotate and relax :)